2025-07-12 14:50:55.549925 | Job console starting
2025-07-12 14:50:55.566781 | Updating git repos
2025-07-12 14:50:55.627488 | Cloning repos into workspace
2025-07-12 14:50:55.820753 | Restoring repo states
2025-07-12 14:50:55.846453 | Merging changes
2025-07-12 14:50:55.846491 | Checking out repos
2025-07-12 14:50:56.070250 | Preparing playbooks
2025-07-12 14:50:56.762495 | Running Ansible setup
2025-07-12 14:51:01.044966 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-12 14:51:01.828671 |
2025-07-12 14:51:01.828842 | PLAY [Base pre]
2025-07-12 14:51:01.846178 |
2025-07-12 14:51:01.846317 | TASK [Setup log path fact]
2025-07-12 14:51:01.876225 | orchestrator | ok
2025-07-12 14:51:01.893837 |
2025-07-12 14:51:01.894040 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-12 14:51:01.923541 | orchestrator | ok
2025-07-12 14:51:01.935340 |
2025-07-12 14:51:01.935461 | TASK [emit-job-header : Print job information]
2025-07-12 14:51:01.987290 | # Job Information
2025-07-12 14:51:01.987498 | Ansible Version: 2.16.14
2025-07-12 14:51:01.987541 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-07-12 14:51:01.987586 | Pipeline: post
2025-07-12 14:51:01.987617 | Executor: 521e9411259a
2025-07-12 14:51:01.987643 | Triggered by: https://github.com/osism/testbed/commit/f2881b2e2749c75851d67764eac24722ef14cd3c
2025-07-12 14:51:01.987670 | Event ID: 9839603e-5f2f-11f0-9142-681d3dd81075
2025-07-12 14:51:01.996132 |
2025-07-12 14:51:01.996248 | LOOP [emit-job-header : Print node information]
2025-07-12 14:51:02.116741 | orchestrator | ok:
2025-07-12 14:51:02.117044 | orchestrator | # Node Information
2025-07-12 14:51:02.117098 | orchestrator | Inventory Hostname: orchestrator
2025-07-12 14:51:02.117124 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-12 14:51:02.117147 | orchestrator | Username: zuul-testbed02
2025-07-12 14:51:02.117167 | orchestrator | Distro: Debian 12.11
2025-07-12 14:51:02.117192 | orchestrator | Provider: static-testbed
2025-07-12 14:51:02.117213 | orchestrator | Region:
2025-07-12 14:51:02.117235 | orchestrator | Label: testbed-orchestrator
2025-07-12 14:51:02.117256 | orchestrator | Product Name: OpenStack Nova
2025-07-12 14:51:02.117275 | orchestrator | Interface IP: 81.163.193.140
2025-07-12 14:51:02.138751 |
2025-07-12 14:51:02.138928 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-12 14:51:02.610542 | orchestrator -> localhost | changed
2025-07-12 14:51:02.627487 |
2025-07-12 14:51:02.627649 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-12 14:51:03.683058 | orchestrator -> localhost | changed
2025-07-12 14:51:03.697814 |
2025-07-12 14:51:03.697959 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-12 14:51:03.984922 | orchestrator -> localhost | ok
2025-07-12 14:51:03.992182 |
2025-07-12 14:51:03.992304 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-12 14:51:04.021413 | orchestrator | ok
2025-07-12 14:51:04.042659 | orchestrator | included: /var/lib/zuul/builds/136d96fce2ea4ab0a333aeec44b1cc40/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-12 14:51:04.050881 |
2025-07-12 14:51:04.051003 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-12 14:51:05.529250 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-12 14:51:05.529613 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/136d96fce2ea4ab0a333aeec44b1cc40/work/136d96fce2ea4ab0a333aeec44b1cc40_id_rsa
2025-07-12 14:51:05.529677 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/136d96fce2ea4ab0a333aeec44b1cc40/work/136d96fce2ea4ab0a333aeec44b1cc40_id_rsa.pub
2025-07-12 14:51:05.529723 | orchestrator -> localhost | The key fingerprint is:
2025-07-12 14:51:05.529764 | orchestrator -> localhost | SHA256:xNKDnjldudYoFVWe8R7A+GXs0zfPxRTep1u0bslG+kk zuul-build-sshkey
2025-07-12 14:51:05.529802 | orchestrator -> localhost | The key's randomart image is:
2025-07-12 14:51:05.529852 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-12 14:51:05.529892 | orchestrator -> localhost | | ..+o+..|
2025-07-12 14:51:05.529929 | orchestrator -> localhost | | + + ooBo|
2025-07-12 14:51:05.529965 | orchestrator -> localhost | | o = + . *=*|
2025-07-12 14:51:05.530062 | orchestrator -> localhost | | . * + + . BO|
2025-07-12 14:51:05.530102 | orchestrator -> localhost | | = S + . .+O|
2025-07-12 14:51:05.530150 | orchestrator -> localhost | | . o =o+|
2025-07-12 14:51:05.530184 | orchestrator -> localhost | | ..E |
2025-07-12 14:51:05.530217 | orchestrator -> localhost | | = .|
2025-07-12 14:51:05.530250 | orchestrator -> localhost | | o |
2025-07-12 14:51:05.530283 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-12 14:51:05.530371 | orchestrator -> localhost | ok: Runtime: 0:00:00.949412
2025-07-12 14:51:05.541239 |
2025-07-12 14:51:05.541379 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-12 14:51:05.587004 | orchestrator | ok
2025-07-12 14:51:05.601724 | orchestrator | included: /var/lib/zuul/builds/136d96fce2ea4ab0a333aeec44b1cc40/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-12 14:51:05.615080 |
2025-07-12 14:51:05.615205 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-12 14:51:05.638964 | orchestrator | skipping: Conditional result was False
2025-07-12 14:51:05.652312 |
2025-07-12 14:51:05.652483 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-12 14:51:06.280212 | orchestrator | changed
2025-07-12 14:51:06.288293 |
2025-07-12 14:51:06.288424 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-12 14:51:06.567810 | orchestrator | ok
2025-07-12 14:51:06.576231 |
2025-07-12 14:51:06.576372 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-12 14:51:07.202258 | orchestrator | ok
2025-07-12 14:51:07.210192 |
2025-07-12 14:51:07.210335 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-12 14:51:07.653962 | orchestrator | ok
2025-07-12 14:51:07.660905 |
2025-07-12 14:51:07.661070 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-12 14:51:07.689308 | orchestrator | skipping: Conditional result was False
2025-07-12 14:51:07.704594 |
2025-07-12 14:51:07.704758 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-12 14:51:08.167222 | orchestrator -> localhost | changed
2025-07-12 14:51:08.181586 |
2025-07-12 14:51:08.181784 | TASK [add-build-sshkey : Add back temp key]
2025-07-12 14:51:08.529383 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/136d96fce2ea4ab0a333aeec44b1cc40/work/136d96fce2ea4ab0a333aeec44b1cc40_id_rsa (zuul-build-sshkey)
2025-07-12 14:51:08.529622 | orchestrator -> localhost | ok: Runtime: 0:00:00.017802
2025-07-12 14:51:08.537464 |
2025-07-12 14:51:08.537581 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-12 14:51:08.948962 | orchestrator | ok
2025-07-12 14:51:08.958210 |
2025-07-12 14:51:08.958351 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-12 14:51:08.992400 | orchestrator | skipping: Conditional result was False
2025-07-12 14:51:09.052701 |
2025-07-12 14:51:09.052829 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-12 14:51:09.468517 | orchestrator | ok
2025-07-12 14:51:09.485263 |
2025-07-12 14:51:09.485415 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-12 14:51:09.514732 | orchestrator | ok
2025-07-12 14:51:09.522209 |
2025-07-12 14:51:09.522325 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-12 14:51:09.813335 | orchestrator -> localhost | ok
2025-07-12 14:51:09.820946 |
2025-07-12 14:51:09.821097 | TASK [validate-host : Collect information about the host]
2025-07-12 14:51:11.001853 | orchestrator | ok
2025-07-12 14:51:11.016893 |
2025-07-12 14:51:11.017072 | TASK [validate-host : Sanitize hostname]
2025-07-12 14:51:11.094266 | orchestrator | ok
2025-07-12 14:51:11.104783 |
2025-07-12 14:51:11.104944 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-12 14:51:11.687424 | orchestrator -> localhost | changed
2025-07-12 14:51:11.696123 |
2025-07-12 14:51:11.696271 | TASK [validate-host : Collect information about zuul worker]
2025-07-12 14:51:12.126963 | orchestrator | ok
2025-07-12 14:51:12.135747 |
2025-07-12 14:51:12.135902 | TASK [validate-host : Write out all zuul information for each host]
2025-07-12 14:51:12.669351 | orchestrator -> localhost | changed
2025-07-12 14:51:12.689097 |
2025-07-12 14:51:12.689284 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-12 14:51:12.980644 | orchestrator | ok
2025-07-12 14:51:12.990211 |
2025-07-12 14:51:12.990363 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-12 14:51:44.761129 | orchestrator | changed:
2025-07-12 14:51:44.761387 | orchestrator | .d..t...... src/
2025-07-12 14:51:44.761432 | orchestrator | .d..t...... src/github.com/
2025-07-12 14:51:44.761464 | orchestrator | .d..t...... src/github.com/osism/
2025-07-12 14:51:44.761490 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-12 14:51:44.761516 | orchestrator | RedHat.yml
2025-07-12 14:51:44.773418 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-12 14:51:44.773436 | orchestrator | RedHat.yml
2025-07-12 14:51:44.773489 | orchestrator | = 1.53.0"...
2025-07-12 14:52:03.606334 | orchestrator | 14:52:03.606 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-07-12 14:52:04.617564 | orchestrator | 14:52:04.617 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-12 14:52:05.644597 | orchestrator | 14:52:05.643 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-12 14:52:06.819893 | orchestrator | 14:52:06.819 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.0...
2025-07-12 14:52:08.033562 | orchestrator | 14:52:08.033 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.0 (signed, key ID 4F80527A391BEFD2)
2025-07-12 14:52:08.593212 | orchestrator | 14:52:08.592 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-12 14:52:09.457901 | orchestrator | 14:52:09.457 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-12 14:52:09.458211 | orchestrator | 14:52:09.458 STDOUT terraform: Providers are signed by their developers.
2025-07-12 14:52:09.458225 | orchestrator | 14:52:09.458 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-12 14:52:09.458230 | orchestrator | 14:52:09.458 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-12 14:52:09.458476 | orchestrator | 14:52:09.458 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-12 14:52:09.458489 | orchestrator | 14:52:09.458 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-12 14:52:09.458496 | orchestrator | 14:52:09.458 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-12 14:52:09.458500 | orchestrator | 14:52:09.458 STDOUT terraform: you run "tofu init" in the future.
2025-07-12 14:52:09.459097 | orchestrator | 14:52:09.459 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-12 14:52:09.459412 | orchestrator | 14:52:09.459 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-12 14:52:09.459422 | orchestrator | 14:52:09.459 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-12 14:52:09.459427 | orchestrator | 14:52:09.459 STDOUT terraform: should now work.
2025-07-12 14:52:09.459431 | orchestrator | 14:52:09.459 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-12 14:52:09.459435 | orchestrator | 14:52:09.459 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-12 14:52:09.459440 | orchestrator | 14:52:09.459 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-12 14:52:09.572014 | orchestrator | 14:52:09.571 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-12 14:52:09.572203 | orchestrator | 14:52:09.572 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-12 14:52:09.834080 | orchestrator | 14:52:09.833 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-12 14:52:09.834232 | orchestrator | 14:52:09.833 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-12 14:52:09.834250 | orchestrator | 14:52:09.833 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-12 14:52:09.834259 | orchestrator | 14:52:09.833 STDOUT terraform: for this configuration.
2025-07-12 14:52:09.979974 | orchestrator | 14:52:09.979 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-12 14:52:09.980068 | orchestrator | 14:52:09.980 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-12 14:52:10.098045 | orchestrator | 14:52:10.097 STDOUT terraform: ci.auto.tfvars
2025-07-12 14:52:10.102374 | orchestrator | 14:52:10.101 STDOUT terraform: default_custom.tf
2025-07-12 14:52:10.257776 | orchestrator | 14:52:10.257 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-12 14:52:11.138291 | orchestrator | 14:52:11.138 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-07-12 14:52:11.691723 | orchestrator | 14:52:11.691 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-12 14:52:11.902355 | orchestrator | 14:52:11.902 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-12 14:52:11.902425 | orchestrator | 14:52:11.902 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-12 14:52:11.902432 | orchestrator | 14:52:11.902 STDOUT terraform:   + create
2025-07-12 14:52:11.902438 | orchestrator | 14:52:11.902 STDOUT terraform:  <= read (data resources)
2025-07-12 14:52:11.902443 | orchestrator | 14:52:11.902 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-12 14:52:11.902449 | orchestrator | 14:52:11.902 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-07-12 14:52:11.902483 | orchestrator | 14:52:11.902 STDOUT terraform:   # (config refers to values not yet known)
2025-07-12 14:52:11.902514 | orchestrator | 14:52:11.902 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-07-12 14:52:11.902545 | orchestrator | 14:52:11.902 STDOUT terraform:       + checksum = (known after apply)
2025-07-12 14:52:11.902573 | orchestrator | 14:52:11.902 STDOUT terraform:       + created_at = (known after apply)
2025-07-12 14:52:11.902602 | orchestrator | 14:52:11.902 STDOUT terraform:       + file = (known after apply)
2025-07-12 14:52:11.902649 | orchestrator | 14:52:11.902 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.902694 | orchestrator | 14:52:11.902 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 14:52:11.902722 | orchestrator | 14:52:11.902 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-07-12 14:52:11.902752 | orchestrator | 14:52:11.902 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-07-12 14:52:11.902763 | orchestrator | 14:52:11.902 STDOUT terraform:       + most_recent = true
2025-07-12 14:52:11.902794 | orchestrator | 14:52:11.902 STDOUT terraform:       + name = (known after apply)
2025-07-12 14:52:11.902856 | orchestrator | 14:52:11.902 STDOUT terraform:       + protected = (known after apply)
2025-07-12 14:52:11.902884 | orchestrator | 14:52:11.902 STDOUT terraform:       + region = (known after apply)
2025-07-12 14:52:11.902914 | orchestrator | 14:52:11.902 STDOUT terraform:       + schema = (known after apply)
2025-07-12 14:52:11.902957 | orchestrator | 14:52:11.902 STDOUT terraform:       + size_bytes = (known after apply)
2025-07-12 14:52:11.902987 | orchestrator | 14:52:11.902 STDOUT terraform:       + tags = (known after apply)
2025-07-12 14:52:11.903016 | orchestrator | 14:52:11.902 STDOUT terraform:       + updated_at = (known after apply)
2025-07-12 14:52:11.903030 | orchestrator | 14:52:11.903 STDOUT terraform:     }
2025-07-12 14:52:11.903082 | orchestrator | 14:52:11.903 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-07-12 14:52:11.903106 | orchestrator | 14:52:11.903 STDOUT terraform:   # (config refers to values not yet known)
2025-07-12 14:52:11.903144 | orchestrator | 14:52:11.903 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-07-12 14:52:11.903172 | orchestrator | 14:52:11.903 STDOUT terraform:       + checksum = (known after apply)
2025-07-12 14:52:11.903200 | orchestrator | 14:52:11.903 STDOUT terraform:       + created_at = (known after apply)
2025-07-12 14:52:11.903227 | orchestrator | 14:52:11.903 STDOUT terraform:       + file = (known after apply)
2025-07-12 14:52:11.903257 | orchestrator | 14:52:11.903 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.903284 | orchestrator | 14:52:11.903 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 14:52:11.903311 | orchestrator | 14:52:11.903 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-07-12 14:52:11.903338 | orchestrator | 14:52:11.903 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-07-12 14:52:11.903374 | orchestrator | 14:52:11.903 STDOUT terraform:       + most_recent = true
2025-07-12 14:52:11.903381 | orchestrator | 14:52:11.903 STDOUT terraform:       + name = (known after apply)
2025-07-12 14:52:11.903410 | orchestrator | 14:52:11.903 STDOUT terraform:       + protected = (known after apply)
2025-07-12 14:52:11.903437 | orchestrator | 14:52:11.903 STDOUT terraform:       + region = (known after apply)
2025-07-12 14:52:11.903465 | orchestrator | 14:52:11.903 STDOUT terraform:       + schema = (known after apply)
2025-07-12 14:52:11.903493 | orchestrator | 14:52:11.903 STDOUT terraform:       + size_bytes = (known after apply)
2025-07-12 14:52:11.903519 | orchestrator | 14:52:11.903 STDOUT terraform:       + tags = (known after apply)
2025-07-12 14:52:11.903547 | orchestrator | 14:52:11.903 STDOUT terraform:       + updated_at = (known after apply)
2025-07-12 14:52:11.903553 | orchestrator | 14:52:11.903 STDOUT terraform:     }
2025-07-12 14:52:11.903586 | orchestrator | 14:52:11.903 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-07-12 14:52:11.903615 | orchestrator | 14:52:11.903 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-07-12 14:52:11.903649 | orchestrator | 14:52:11.903 STDOUT terraform:       + content = (known after apply)
2025-07-12 14:52:11.903682 | orchestrator | 14:52:11.903 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-12 14:52:11.903715 | orchestrator | 14:52:11.903 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-12 14:52:11.903751 | orchestrator | 14:52:11.903 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-12 14:52:11.903788 | orchestrator | 14:52:11.903 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-12 14:52:11.903833 | orchestrator | 14:52:11.903 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-12 14:52:11.903877 | orchestrator | 14:52:11.903 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-12 14:52:11.903900 | orchestrator | 14:52:11.903 STDOUT terraform:       + directory_permission = "0777"
2025-07-12 14:52:11.903925 | orchestrator | 14:52:11.903 STDOUT terraform:       + file_permission = "0644"
2025-07-12 14:52:11.903960 | orchestrator | 14:52:11.903 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-07-12 14:52:11.903994 | orchestrator | 14:52:11.903 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.904009 | orchestrator | 14:52:11.903 STDOUT terraform:     }
2025-07-12 14:52:11.904042 | orchestrator | 14:52:11.904 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-07-12 14:52:11.904061 | orchestrator | 14:52:11.904 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-07-12 14:52:11.904096 | orchestrator | 14:52:11.904 STDOUT terraform:       + content = (known after apply)
2025-07-12 14:52:11.904137 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-12 14:52:11.904185 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-12 14:52:11.904211 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-12 14:52:11.904246 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-12 14:52:11.904281 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-12 14:52:11.904315 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-12 14:52:11.904361 | orchestrator | 14:52:11.904 STDOUT terraform:       + directory_permission = "0777"
2025-07-12 14:52:11.904382 | orchestrator | 14:52:11.904 STDOUT terraform:       + file_permission = "0644"
2025-07-12 14:52:11.904410 | orchestrator | 14:52:11.904 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-07-12 14:52:11.904445 | orchestrator | 14:52:11.904 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.904459 | orchestrator | 14:52:11.904 STDOUT terraform:     }
2025-07-12 14:52:11.904487 | orchestrator | 14:52:11.904 STDOUT terraform:   # local_file.inventory will be created
2025-07-12 14:52:11.904518 | orchestrator | 14:52:11.904 STDOUT terraform:   + resource "local_file" "inventory" {
2025-07-12 14:52:11.904557 | orchestrator | 14:52:11.904 STDOUT terraform:       + content = (known after apply)
2025-07-12 14:52:11.904592 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-12 14:52:11.904626 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-12 14:52:11.904661 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-12 14:52:11.904695 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-12 14:52:11.904738 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-12 14:52:11.904763 | orchestrator | 14:52:11.904 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-12 14:52:11.904787 | orchestrator | 14:52:11.904 STDOUT terraform:       + directory_permission = "0777"
2025-07-12 14:52:11.904854 | orchestrator | 14:52:11.904 STDOUT terraform:       + file_permission = "0644"
2025-07-12 14:52:11.904886 | orchestrator | 14:52:11.904 STDOUT terraform:       + filename = "inventory.ci"
2025-07-12 14:52:11.904955 | orchestrator | 14:52:11.904 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.904970 | orchestrator | 14:52:11.904 STDOUT terraform:     }
2025-07-12 14:52:11.905003 | orchestrator | 14:52:11.904 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-07-12 14:52:11.905032 | orchestrator | 14:52:11.905 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-07-12 14:52:11.905066 | orchestrator | 14:52:11.905 STDOUT terraform:       + content = (sensitive value)
2025-07-12 14:52:11.905098 | orchestrator | 14:52:11.905 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-12 14:52:11.905142 | orchestrator | 14:52:11.905 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-12 14:52:11.905169 | orchestrator | 14:52:11.905 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-12 14:52:11.905205 | orchestrator | 14:52:11.905 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-12 14:52:11.905238 | orchestrator | 14:52:11.905 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-12 14:52:11.905271 | orchestrator | 14:52:11.905 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-12 14:52:11.905296 | orchestrator | 14:52:11.905 STDOUT terraform:       + directory_permission = "0700"
2025-07-12 14:52:11.905332 | orchestrator | 14:52:11.905 STDOUT terraform:       + file_permission = "0600"
2025-07-12 14:52:11.905361 | orchestrator | 14:52:11.905 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-07-12 14:52:11.905398 | orchestrator | 14:52:11.905 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.905405 | orchestrator | 14:52:11.905 STDOUT terraform:     }
2025-07-12 14:52:11.905435 | orchestrator | 14:52:11.905 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-07-12 14:52:11.905465 | orchestrator | 14:52:11.905 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-07-12 14:52:11.905487 | orchestrator | 14:52:11.905 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.905493 | orchestrator | 14:52:11.905 STDOUT terraform:     }
2025-07-12 14:52:11.905546 | orchestrator | 14:52:11.905 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-12 14:52:11.905593 | orchestrator | 14:52:11.905 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-12 14:52:11.905622 | orchestrator | 14:52:11.905 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 14:52:11.905645 | orchestrator | 14:52:11.905 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 14:52:11.905683 | orchestrator | 14:52:11.905 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.905715 | orchestrator | 14:52:11.905 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 14:52:11.905783 | orchestrator | 14:52:11.905 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 14:52:11.905880 | orchestrator | 14:52:11.905 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-07-12 14:52:11.905933 | orchestrator | 14:52:11.905 STDOUT terraform:       + region = (known after apply)
2025-07-12 14:52:11.905982 | orchestrator | 14:52:11.905 STDOUT terraform:       + size = 80
2025-07-12 14:52:11.906008 | orchestrator | 14:52:11.905 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 14:52:11.906176 | orchestrator | 14:52:11.906 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 14:52:11.906251 | orchestrator | 14:52:11.906 STDOUT terraform:     }
2025-07-12 14:52:11.906322 | orchestrator | 14:52:11.906 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-12 14:52:11.906370 | orchestrator | 14:52:11.906 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 14:52:11.906412 | orchestrator | 14:52:11.906 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 14:52:11.906452 | orchestrator | 14:52:11.906 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 14:52:11.906489 | orchestrator | 14:52:11.906 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.906525 | orchestrator | 14:52:11.906 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 14:52:11.906560 | orchestrator | 14:52:11.906 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 14:52:11.906607 | orchestrator | 14:52:11.906 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-07-12 14:52:11.906656 | orchestrator | 14:52:11.906 STDOUT terraform:       + region = (known after apply)
2025-07-12 14:52:11.906676 | orchestrator | 14:52:11.906 STDOUT terraform:       + size = 80
2025-07-12 14:52:11.906702 | orchestrator | 14:52:11.906 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 14:52:11.906728 | orchestrator | 14:52:11.906 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 14:52:11.906742 | orchestrator | 14:52:11.906 STDOUT terraform:     }
2025-07-12 14:52:11.906790 | orchestrator | 14:52:11.906 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-12 14:52:11.906851 | orchestrator | 14:52:11.906 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 14:52:11.906888 | orchestrator | 14:52:11.906 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 14:52:11.906912 | orchestrator | 14:52:11.906 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 14:52:11.906947 | orchestrator | 14:52:11.906 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.906992 | orchestrator | 14:52:11.906 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 14:52:11.907034 | orchestrator | 14:52:11.906 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 14:52:11.907078 | orchestrator | 14:52:11.907 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-07-12 14:52:11.907116 | orchestrator | 14:52:11.907 STDOUT terraform:       + region = (known after apply)
2025-07-12 14:52:11.907138 | orchestrator | 14:52:11.907 STDOUT terraform:       + size = 80
2025-07-12 14:52:11.907170 | orchestrator | 14:52:11.907 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 14:52:11.907191 | orchestrator | 14:52:11.907 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 14:52:11.907197 | orchestrator | 14:52:11.907 STDOUT terraform:     }
2025-07-12 14:52:11.907245 | orchestrator | 14:52:11.907 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-12 14:52:11.907292 | orchestrator | 14:52:11.907 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 14:52:11.907327 | orchestrator | 14:52:11.907 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 14:52:11.907365 | orchestrator | 14:52:11.907 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 14:52:11.907390 | orchestrator | 14:52:11.907 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.907426 | orchestrator | 14:52:11.907 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 14:52:11.907462 | orchestrator | 14:52:11.907 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 14:52:11.907506 | orchestrator | 14:52:11.907 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-07-12 14:52:11.907543 | orchestrator | 14:52:11.907 STDOUT terraform:       + region = (known after apply)
2025-07-12 14:52:11.907565 | orchestrator | 14:52:11.907 STDOUT terraform:       + size = 80
2025-07-12 14:52:11.907587 | orchestrator | 14:52:11.907 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 14:52:11.907611 | orchestrator | 14:52:11.907 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 14:52:11.907617 | orchestrator | 14:52:11.907 STDOUT terraform:     }
2025-07-12 14:52:11.907666 | orchestrator | 14:52:11.907 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-12 14:52:11.907734 | orchestrator | 14:52:11.907 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 14:52:11.907762 | orchestrator | 14:52:11.907 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 14:52:11.907788 | orchestrator | 14:52:11.907 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 14:52:11.907837 | orchestrator | 14:52:11.907 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.907875 | orchestrator | 14:52:11.907 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 14:52:11.907913 | orchestrator | 14:52:11.907 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 14:52:11.907952 | orchestrator | 14:52:11.907 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-07-12 14:52:11.907998 | orchestrator | 14:52:11.907 STDOUT terraform:       + region = (known after apply)
2025-07-12 14:52:11.908021 | orchestrator | 14:52:11.907 STDOUT terraform:       + size = 80
2025-07-12 14:52:11.908046 | orchestrator | 14:52:11.908 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 14:52:11.908072 | orchestrator | 14:52:11.908 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 14:52:11.908079 | orchestrator | 14:52:11.908 STDOUT terraform:     }
2025-07-12 14:52:11.908130 | orchestrator | 14:52:11.908 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-12 14:52:11.908225 | orchestrator | 14:52:11.908 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 14:52:11.908268 | orchestrator | 14:52:11.908 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 14:52:11.908293 | orchestrator | 14:52:11.908 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 14:52:11.908331 | orchestrator | 14:52:11.908 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.908369 | orchestrator | 14:52:11.908 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 14:52:11.908408 | orchestrator | 14:52:11.908 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 14:52:11.908455 | orchestrator | 14:52:11.908 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-07-12 14:52:11.908488 | orchestrator | 14:52:11.908 STDOUT terraform:       + region = (known after apply)
2025-07-12 14:52:11.908514 | orchestrator | 14:52:11.908 STDOUT terraform:       + size = 80
2025-07-12 14:52:11.908534 | orchestrator | 14:52:11.908 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 14:52:11.908561 | orchestrator | 14:52:11.908 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 14:52:11.908575 | orchestrator | 14:52:11.908 STDOUT terraform:     }
2025-07-12 14:52:11.908622 | orchestrator | 14:52:11.908 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-07-12 14:52:11.908669 | orchestrator | 14:52:11.908 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 14:52:11.908703 | orchestrator | 14:52:11.908 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 14:52:11.908727 | orchestrator | 14:52:11.908 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 14:52:11.908764 | orchestrator | 14:52:11.908 STDOUT terraform:       + id = (known after apply)
2025-07-12 14:52:11.908801 | orchestrator | 14:52:11.908 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 14:52:11.908851 | orchestrator | 14:52:11.908 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 14:52:11.908899 | orchestrator | 14:52:11.908 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-07-12 14:52:11.908933 | orchestrator | 14:52:11.908 STDOUT terraform:       + region = (known after apply)
2025-07-12 14:52:11.908955 | orchestrator | 14:52:11.908 STDOUT terraform:       + size = 80
2025-07-12 14:52:11.908981 | orchestrator | 14:52:11.908 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 14:52:11.909006 | orchestrator | 14:52:11.909 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 14:52:11.909029 | orchestrator | 14:52:11.909 STDOUT terraform:     }
2025-07-12 14:52:11.909081 | orchestrator | 14:52:11.909 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-07-12 14:52:11.909125 | orchestrator | 14:52:11.909 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-07-12 14:52:11.909165 | orchestrator | 14:52:11.909 STDOUT
terraform:  + attachment = (known after apply) 2025-07-12 14:52:11.909186 | orchestrator | 14:52:11.909 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.909224 | orchestrator | 14:52:11.909 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.909271 | orchestrator | 14:52:11.909 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 14:52:11.909311 | orchestrator | 14:52:11.909 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-12 14:52:11.909347 | orchestrator | 14:52:11.909 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.909370 | orchestrator | 14:52:11.909 STDOUT terraform:  + size = 20 2025-07-12 14:52:11.909401 | orchestrator | 14:52:11.909 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 14:52:11.909421 | orchestrator | 14:52:11.909 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 14:52:11.909427 | orchestrator | 14:52:11.909 STDOUT terraform:  } 2025-07-12 14:52:11.909474 | orchestrator | 14:52:11.909 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-12 14:52:11.909515 | orchestrator | 14:52:11.909 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 14:52:11.909557 | orchestrator | 14:52:11.909 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 14:52:11.909593 | orchestrator | 14:52:11.909 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.909626 | orchestrator | 14:52:11.909 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.909665 | orchestrator | 14:52:11.909 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 14:52:11.909705 | orchestrator | 14:52:11.909 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-12 14:52:11.909741 | orchestrator | 14:52:11.909 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.909787 | orchestrator | 14:52:11.909 STDOUT terraform:  + size = 20 2025-07-12 14:52:11.909842 | 
orchestrator | 14:52:11.909 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 14:52:11.909867 | orchestrator | 14:52:11.909 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 14:52:11.909882 | orchestrator | 14:52:11.909 STDOUT terraform:  } 2025-07-12 14:52:11.909937 | orchestrator | 14:52:11.909 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-12 14:52:11.909972 | orchestrator | 14:52:11.909 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 14:52:11.910008 | orchestrator | 14:52:11.909 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 14:52:11.910048 | orchestrator | 14:52:11.910 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.910088 | orchestrator | 14:52:11.910 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.910122 | orchestrator | 14:52:11.910 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 14:52:11.910162 | orchestrator | 14:52:11.910 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-12 14:52:11.910199 | orchestrator | 14:52:11.910 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.910226 | orchestrator | 14:52:11.910 STDOUT terraform:  + size = 20 2025-07-12 14:52:11.910255 | orchestrator | 14:52:11.910 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 14:52:11.910276 | orchestrator | 14:52:11.910 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 14:52:11.910282 | orchestrator | 14:52:11.910 STDOUT terraform:  } 2025-07-12 14:52:11.910330 | orchestrator | 14:52:11.910 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-12 14:52:11.910371 | orchestrator | 14:52:11.910 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 14:52:11.910405 | orchestrator | 14:52:11.910 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 14:52:11.910428 | orchestrator | 
14:52:11.910 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.910462 | orchestrator | 14:52:11.910 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.910497 | orchestrator | 14:52:11.910 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 14:52:11.910534 | orchestrator | 14:52:11.910 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-12 14:52:11.910568 | orchestrator | 14:52:11.910 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.910588 | orchestrator | 14:52:11.910 STDOUT terraform:  + size = 20 2025-07-12 14:52:11.910612 | orchestrator | 14:52:11.910 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 14:52:11.910635 | orchestrator | 14:52:11.910 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 14:52:11.910642 | orchestrator | 14:52:11.910 STDOUT terraform:  } 2025-07-12 14:52:11.910701 | orchestrator | 14:52:11.910 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-12 14:52:11.910741 | orchestrator | 14:52:11.910 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 14:52:11.910775 | orchestrator | 14:52:11.910 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 14:52:11.910819 | orchestrator | 14:52:11.910 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.910864 | orchestrator | 14:52:11.910 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.910900 | orchestrator | 14:52:11.910 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 14:52:11.910938 | orchestrator | 14:52:11.910 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-12 14:52:11.910977 | orchestrator | 14:52:11.910 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.910990 | orchestrator | 14:52:11.910 STDOUT terraform:  + size = 20 2025-07-12 14:52:11.911012 | orchestrator | 14:52:11.910 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 
14:52:11.911035 | orchestrator | 14:52:11.911 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 14:52:11.911050 | orchestrator | 14:52:11.911 STDOUT terraform:  } 2025-07-12 14:52:11.911094 | orchestrator | 14:52:11.911 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-12 14:52:11.911138 | orchestrator | 14:52:11.911 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 14:52:11.911174 | orchestrator | 14:52:11.911 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 14:52:11.911199 | orchestrator | 14:52:11.911 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.911235 | orchestrator | 14:52:11.911 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.911268 | orchestrator | 14:52:11.911 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 14:52:11.911305 | orchestrator | 14:52:11.911 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-12 14:52:11.911340 | orchestrator | 14:52:11.911 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.911363 | orchestrator | 14:52:11.911 STDOUT terraform:  + size = 20 2025-07-12 14:52:11.911383 | orchestrator | 14:52:11.911 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 14:52:11.911407 | orchestrator | 14:52:11.911 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 14:52:11.911414 | orchestrator | 14:52:11.911 STDOUT terraform:  } 2025-07-12 14:52:11.911465 | orchestrator | 14:52:11.911 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-12 14:52:11.911506 | orchestrator | 14:52:11.911 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 14:52:11.911541 | orchestrator | 14:52:11.911 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 14:52:11.911563 | orchestrator | 14:52:11.911 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.911598 | 
orchestrator | 14:52:11.911 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.911632 | orchestrator | 14:52:11.911 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 14:52:11.911683 | orchestrator | 14:52:11.911 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-12 14:52:11.911720 | orchestrator | 14:52:11.911 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.911740 | orchestrator | 14:52:11.911 STDOUT terraform:  + size = 20 2025-07-12 14:52:11.911762 | orchestrator | 14:52:11.911 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 14:52:11.911785 | orchestrator | 14:52:11.911 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 14:52:11.911800 | orchestrator | 14:52:11.911 STDOUT terraform:  } 2025-07-12 14:52:11.911853 | orchestrator | 14:52:11.911 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-12 14:52:11.911895 | orchestrator | 14:52:11.911 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 14:52:11.911927 | orchestrator | 14:52:11.911 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 14:52:11.911951 | orchestrator | 14:52:11.911 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.911987 | orchestrator | 14:52:11.911 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.912021 | orchestrator | 14:52:11.911 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 14:52:11.912059 | orchestrator | 14:52:11.912 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-07-12 14:52:11.912094 | orchestrator | 14:52:11.912 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.912118 | orchestrator | 14:52:11.912 STDOUT terraform:  + size = 20 2025-07-12 14:52:11.912142 | orchestrator | 14:52:11.912 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 14:52:11.912167 | orchestrator | 14:52:11.912 STDOUT terraform:  + volume_type = "ssd" 
2025-07-12 14:52:11.912173 | orchestrator | 14:52:11.912 STDOUT terraform:  } 2025-07-12 14:52:11.912218 | orchestrator | 14:52:11.912 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-07-12 14:52:11.912258 | orchestrator | 14:52:11.912 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 14:52:11.912293 | orchestrator | 14:52:11.912 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 14:52:11.912316 | orchestrator | 14:52:11.912 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.912351 | orchestrator | 14:52:11.912 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.912387 | orchestrator | 14:52:11.912 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 14:52:11.912424 | orchestrator | 14:52:11.912 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-07-12 14:52:11.912459 | orchestrator | 14:52:11.912 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.912479 | orchestrator | 14:52:11.912 STDOUT terraform:  + size = 20 2025-07-12 14:52:11.912502 | orchestrator | 14:52:11.912 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 14:52:11.912525 | orchestrator | 14:52:11.912 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 14:52:11.912531 | orchestrator | 14:52:11.912 STDOUT terraform:  } 2025-07-12 14:52:11.912582 | orchestrator | 14:52:11.912 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-07-12 14:52:11.912617 | orchestrator | 14:52:11.912 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-07-12 14:52:11.912651 | orchestrator | 14:52:11.912 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 14:52:11.912683 | orchestrator | 14:52:11.912 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 14:52:11.912718 | orchestrator | 14:52:11.912 STDOUT terraform:  + all_metadata = (known after apply) 
2025-07-12 14:52:11.912752 | orchestrator | 14:52:11.912 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.912773 | orchestrator | 14:52:11.912 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.912794 | orchestrator | 14:52:11.912 STDOUT terraform:  + config_drive = true 2025-07-12 14:52:11.912838 | orchestrator | 14:52:11.912 STDOUT terraform:  + created = (known after apply) 2025-07-12 14:52:11.912871 | orchestrator | 14:52:11.912 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 14:52:11.912900 | orchestrator | 14:52:11.912 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-07-12 14:52:11.912922 | orchestrator | 14:52:11.912 STDOUT terraform:  + force_delete = false 2025-07-12 14:52:11.912956 | orchestrator | 14:52:11.912 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 14:52:11.912990 | orchestrator | 14:52:11.912 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.913023 | orchestrator | 14:52:11.912 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 14:52:11.913058 | orchestrator | 14:52:11.913 STDOUT terraform:  + image_name = (known after apply) 2025-07-12 14:52:11.913083 | orchestrator | 14:52:11.913 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 14:52:11.913113 | orchestrator | 14:52:11.913 STDOUT terraform:  + name = "testbed-manager" 2025-07-12 14:52:11.913137 | orchestrator | 14:52:11.913 STDOUT terraform:  + power_state = "active" 2025-07-12 14:52:11.913171 | orchestrator | 14:52:11.913 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.913205 | orchestrator | 14:52:11.913 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 14:52:11.913228 | orchestrator | 14:52:11.913 STDOUT terraform:  + stop_before_destroy = false 2025-07-12 14:52:11.913262 | orchestrator | 14:52:11.913 STDOUT terraform:  + updated = (known after apply) 2025-07-12 14:52:11.913292 | orchestrator | 14:52:11.913 STDOUT terraform:  + 
user_data = (sensitive value) 2025-07-12 14:52:11.913310 | orchestrator | 14:52:11.913 STDOUT terraform:  + block_device { 2025-07-12 14:52:11.913335 | orchestrator | 14:52:11.913 STDOUT terraform:  + boot_index = 0 2025-07-12 14:52:11.913362 | orchestrator | 14:52:11.913 STDOUT terraform:  + delete_on_termination = false 2025-07-12 14:52:11.913390 | orchestrator | 14:52:11.913 STDOUT terraform:  + destination_type = "volume" 2025-07-12 14:52:11.913418 | orchestrator | 14:52:11.913 STDOUT terraform:  + multiattach = false 2025-07-12 14:52:11.913446 | orchestrator | 14:52:11.913 STDOUT terraform:  + source_type = "volume" 2025-07-12 14:52:11.913483 | orchestrator | 14:52:11.913 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 14:52:11.913497 | orchestrator | 14:52:11.913 STDOUT terraform:  } 2025-07-12 14:52:11.913512 | orchestrator | 14:52:11.913 STDOUT terraform:  + network { 2025-07-12 14:52:11.913532 | orchestrator | 14:52:11.913 STDOUT terraform:  + access_network = false 2025-07-12 14:52:11.913561 | orchestrator | 14:52:11.913 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-12 14:52:11.913590 | orchestrator | 14:52:11.913 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 14:52:11.913621 | orchestrator | 14:52:11.913 STDOUT terraform:  + mac = (known after apply) 2025-07-12 14:52:11.913652 | orchestrator | 14:52:11.913 STDOUT terraform:  + name = (known after apply) 2025-07-12 14:52:11.913682 | orchestrator | 14:52:11.913 STDOUT terraform:  + port = (known after apply) 2025-07-12 14:52:11.913711 | orchestrator | 14:52:11.913 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 14:52:11.913727 | orchestrator | 14:52:11.913 STDOUT terraform:  } 2025-07-12 14:52:11.913734 | orchestrator | 14:52:11.913 STDOUT terraform:  } 2025-07-12 14:52:11.913778 | orchestrator | 14:52:11.913 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-07-12 14:52:11.913844 | orchestrator | 14:52:11.913 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-12 14:52:11.913874 | orchestrator | 14:52:11.913 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 14:52:11.913915 | orchestrator | 14:52:11.913 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 14:52:11.913957 | orchestrator | 14:52:11.913 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 14:52:11.914004 | orchestrator | 14:52:11.913 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.914052 | orchestrator | 14:52:11.914 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.914080 | orchestrator | 14:52:11.914 STDOUT terraform:  + config_drive = true 2025-07-12 14:52:11.914128 | orchestrator | 14:52:11.914 STDOUT terraform:  + created = (known after apply) 2025-07-12 14:52:11.914180 | orchestrator | 14:52:11.914 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 14:52:11.914235 | orchestrator | 14:52:11.914 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 14:52:11.914269 | orchestrator | 14:52:11.914 STDOUT terraform:  + force_delete = false 2025-07-12 14:52:11.914323 | orchestrator | 14:52:11.914 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 14:52:11.914360 | orchestrator | 14:52:11.914 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.914395 | orchestrator | 14:52:11.914 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 14:52:11.914429 | orchestrator | 14:52:11.914 STDOUT terraform:  + image_name = (known after apply) 2025-07-12 14:52:11.914456 | orchestrator | 14:52:11.914 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 14:52:11.914485 | orchestrator | 14:52:11.914 STDOUT terraform:  + name = "testbed-node-0" 2025-07-12 14:52:11.914518 | orchestrator | 14:52:11.914 STDOUT terraform:  + power_state = "active" 2025-07-12 14:52:11.914577 | orchestrator | 14:52:11.914 STDOUT terraform:  + region = (known after 
apply) 2025-07-12 14:52:11.914622 | orchestrator | 14:52:11.914 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 14:52:11.914646 | orchestrator | 14:52:11.914 STDOUT terraform:  + stop_before_destroy = false 2025-07-12 14:52:11.914684 | orchestrator | 14:52:11.914 STDOUT terraform:  + updated = (known after apply) 2025-07-12 14:52:11.914737 | orchestrator | 14:52:11.914 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-12 14:52:11.914753 | orchestrator | 14:52:11.914 STDOUT terraform:  + block_device { 2025-07-12 14:52:11.914779 | orchestrator | 14:52:11.914 STDOUT terraform:  + boot_index = 0 2025-07-12 14:52:11.914821 | orchestrator | 14:52:11.914 STDOUT terraform:  + delete_on_termination = false 2025-07-12 14:52:11.914848 | orchestrator | 14:52:11.914 STDOUT terraform:  + destination_type = "volume" 2025-07-12 14:52:11.914875 | orchestrator | 14:52:11.914 STDOUT terraform:  + multiattach = false 2025-07-12 14:52:11.914904 | orchestrator | 14:52:11.914 STDOUT terraform:  + source_type = "volume" 2025-07-12 14:52:11.914943 | orchestrator | 14:52:11.914 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 14:52:11.914957 | orchestrator | 14:52:11.914 STDOUT terraform:  } 2025-07-12 14:52:11.914973 | orchestrator | 14:52:11.914 STDOUT terraform:  + network { 2025-07-12 14:52:11.914993 | orchestrator | 14:52:11.914 STDOUT terraform:  + access_network = false 2025-07-12 14:52:11.915024 | orchestrator | 14:52:11.914 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-12 14:52:11.915053 | orchestrator | 14:52:11.915 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 14:52:11.915085 | orchestrator | 14:52:11.915 STDOUT terraform:  + mac = (known after apply) 2025-07-12 14:52:11.915116 | orchestrator | 14:52:11.915 STDOUT terraform:  + name = (known after apply) 2025-07-12 14:52:11.915146 | orchestrator | 14:52:11.915 STDOUT terraform:  + port = (known after apply) 2025-07-12 
14:52:11.915175 | orchestrator | 14:52:11.915 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 14:52:11.915189 | orchestrator | 14:52:11.915 STDOUT terraform:  } 2025-07-12 14:52:11.915204 | orchestrator | 14:52:11.915 STDOUT terraform:  } 2025-07-12 14:52:11.915248 | orchestrator | 14:52:11.915 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-07-12 14:52:11.915289 | orchestrator | 14:52:11.915 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-12 14:52:11.915323 | orchestrator | 14:52:11.915 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 14:52:11.915358 | orchestrator | 14:52:11.915 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 14:52:11.915392 | orchestrator | 14:52:11.915 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 14:52:11.915426 | orchestrator | 14:52:11.915 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.915449 | orchestrator | 14:52:11.915 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.915469 | orchestrator | 14:52:11.915 STDOUT terraform:  + config_drive = true 2025-07-12 14:52:11.915504 | orchestrator | 14:52:11.915 STDOUT terraform:  + created = (known after apply) 2025-07-12 14:52:11.915537 | orchestrator | 14:52:11.915 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 14:52:11.915565 | orchestrator | 14:52:11.915 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 14:52:11.915587 | orchestrator | 14:52:11.915 STDOUT terraform:  + force_delete = false 2025-07-12 14:52:11.915626 | orchestrator | 14:52:11.915 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 14:52:11.915655 | orchestrator | 14:52:11.915 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.915690 | orchestrator | 14:52:11.915 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 14:52:11.915723 | orchestrator | 14:52:11.915 STDOUT 
terraform:  + image_name = (known after apply) 2025-07-12 14:52:11.915748 | orchestrator | 14:52:11.915 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 14:52:11.915777 | orchestrator | 14:52:11.915 STDOUT terraform:  + name = "testbed-node-1" 2025-07-12 14:52:11.915801 | orchestrator | 14:52:11.915 STDOUT terraform:  + power_state = "active" 2025-07-12 14:52:11.915844 | orchestrator | 14:52:11.915 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.915873 | orchestrator | 14:52:11.915 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 14:52:11.915894 | orchestrator | 14:52:11.915 STDOUT terraform:  + stop_before_destroy = false 2025-07-12 14:52:11.915928 | orchestrator | 14:52:11.915 STDOUT terraform:  + updated = (known after apply) 2025-07-12 14:52:11.915975 | orchestrator | 14:52:11.915 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-12 14:52:11.915992 | orchestrator | 14:52:11.915 STDOUT terraform:  + block_device { 2025-07-12 14:52:11.916016 | orchestrator | 14:52:11.915 STDOUT terraform:  + boot_index = 0 2025-07-12 14:52:11.916046 | orchestrator | 14:52:11.916 STDOUT terraform:  + delete_on_termination = false 2025-07-12 14:52:11.916071 | orchestrator | 14:52:11.916 STDOUT terraform:  + destination_type = "volume" 2025-07-12 14:52:11.916098 | orchestrator | 14:52:11.916 STDOUT terraform:  + multiattach = false 2025-07-12 14:52:11.916127 | orchestrator | 14:52:11.916 STDOUT terraform:  + source_type = "volume" 2025-07-12 14:52:11.916164 | orchestrator | 14:52:11.916 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 14:52:11.916171 | orchestrator | 14:52:11.916 STDOUT terraform:  } 2025-07-12 14:52:11.916188 | orchestrator | 14:52:11.916 STDOUT terraform:  + network { 2025-07-12 14:52:11.916208 | orchestrator | 14:52:11.916 STDOUT terraform:  + access_network = false 2025-07-12 14:52:11.916238 | orchestrator | 14:52:11.916 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-07-12 14:52:11.916267 | orchestrator | 14:52:11.916 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 14:52:11.916299 | orchestrator | 14:52:11.916 STDOUT terraform:  + mac = (known after apply) 2025-07-12 14:52:11.916331 | orchestrator | 14:52:11.916 STDOUT terraform:  + name = (known after apply) 2025-07-12 14:52:11.916361 | orchestrator | 14:52:11.916 STDOUT terraform:  + port = (known after apply) 2025-07-12 14:52:11.916390 | orchestrator | 14:52:11.916 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 14:52:11.916405 | orchestrator | 14:52:11.916 STDOUT terraform:  } 2025-07-12 14:52:11.916419 | orchestrator | 14:52:11.916 STDOUT terraform:  } 2025-07-12 14:52:11.916459 | orchestrator | 14:52:11.916 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-07-12 14:52:11.916499 | orchestrator | 14:52:11.916 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-12 14:52:11.916533 | orchestrator | 14:52:11.916 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 14:52:11.916567 | orchestrator | 14:52:11.916 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 14:52:11.916599 | orchestrator | 14:52:11.916 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 14:52:11.916633 | orchestrator | 14:52:11.916 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.916656 | orchestrator | 14:52:11.916 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 14:52:11.916675 | orchestrator | 14:52:11.916 STDOUT terraform:  + config_drive = true 2025-07-12 14:52:11.916719 | orchestrator | 14:52:11.916 STDOUT terraform:  + created = (known after apply) 2025-07-12 14:52:11.916753 | orchestrator | 14:52:11.916 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 14:52:11.916781 | orchestrator | 14:52:11.916 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 14:52:11.916814 | orchestrator | 14:52:11.916 
2025-07-12 14:52:11.916943 | orchestrator | 14:52:11.916 STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
| orchestrator | 14:52:11.927 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 14:52:11.927872 | orchestrator | 14:52:11.927 STDOUT terraform:  } 2025-07-12 14:52:11.927893 | orchestrator | 14:52:11.927 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.927920 | orchestrator | 14:52:11.927 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 14:52:11.927927 | orchestrator | 14:52:11.927 STDOUT terraform:  } 2025-07-12 14:52:11.927948 | orchestrator | 14:52:11.927 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.927974 | orchestrator | 14:52:11.927 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 14:52:11.927981 | orchestrator | 14:52:11.927 STDOUT terraform:  } 2025-07-12 14:52:11.928003 | orchestrator | 14:52:11.927 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.928030 | orchestrator | 14:52:11.927 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 14:52:11.928036 | orchestrator | 14:52:11.928 STDOUT terraform:  } 2025-07-12 14:52:11.928060 | orchestrator | 14:52:11.928 STDOUT terraform:  + binding (known after apply) 2025-07-12 14:52:11.928070 | orchestrator | 14:52:11.928 STDOUT terraform:  + fixed_ip { 2025-07-12 14:52:11.928095 | orchestrator | 14:52:11.928 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-07-12 14:52:11.928123 | orchestrator | 14:52:11.928 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 14:52:11.928130 | orchestrator | 14:52:11.928 STDOUT terraform:  } 2025-07-12 14:52:11.928150 | orchestrator | 14:52:11.928 STDOUT terraform:  } 2025-07-12 14:52:11.928191 | orchestrator | 14:52:11.928 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-07-12 14:52:11.928234 | orchestrator | 14:52:11.928 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 14:52:11.928267 | orchestrator | 14:52:11.928 STDOUT terraform:  + admin_state_up = (known after 
apply) 2025-07-12 14:52:11.928301 | orchestrator | 14:52:11.928 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 14:52:11.928335 | orchestrator | 14:52:11.928 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 14:52:11.928369 | orchestrator | 14:52:11.928 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.928403 | orchestrator | 14:52:11.928 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 14:52:11.928436 | orchestrator | 14:52:11.928 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 14:52:11.928471 | orchestrator | 14:52:11.928 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 14:52:11.928506 | orchestrator | 14:52:11.928 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 14:52:11.928540 | orchestrator | 14:52:11.928 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.928574 | orchestrator | 14:52:11.928 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 14:52:11.928608 | orchestrator | 14:52:11.928 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 14:52:11.928641 | orchestrator | 14:52:11.928 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 14:52:11.928675 | orchestrator | 14:52:11.928 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 14:52:11.928709 | orchestrator | 14:52:11.928 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.928743 | orchestrator | 14:52:11.928 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 14:52:11.928778 | orchestrator | 14:52:11.928 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.928795 | orchestrator | 14:52:11.928 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.928895 | orchestrator | 14:52:11.928 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 14:52:11.928902 | orchestrator | 14:52:11.928 STDOUT terraform:  } 2025-07-12 
14:52:11.928925 | orchestrator | 14:52:11.928 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.928953 | orchestrator | 14:52:11.928 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 14:52:11.928959 | orchestrator | 14:52:11.928 STDOUT terraform:  } 2025-07-12 14:52:11.928983 | orchestrator | 14:52:11.928 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.929011 | orchestrator | 14:52:11.928 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 14:52:11.929017 | orchestrator | 14:52:11.929 STDOUT terraform:  } 2025-07-12 14:52:11.929040 | orchestrator | 14:52:11.929 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.929066 | orchestrator | 14:52:11.929 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 14:52:11.929073 | orchestrator | 14:52:11.929 STDOUT terraform:  } 2025-07-12 14:52:11.929099 | orchestrator | 14:52:11.929 STDOUT terraform:  + binding (known after apply) 2025-07-12 14:52:11.929105 | orchestrator | 14:52:11.929 STDOUT terraform:  + fixed_ip { 2025-07-12 14:52:11.929132 | orchestrator | 14:52:11.929 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-12 14:52:11.929159 | orchestrator | 14:52:11.929 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 14:52:11.929166 | orchestrator | 14:52:11.929 STDOUT terraform:  } 2025-07-12 14:52:11.929171 | orchestrator | 14:52:11.929 STDOUT terraform:  } 2025-07-12 14:52:11.929219 | orchestrator | 14:52:11.929 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-12 14:52:11.929261 | orchestrator | 14:52:11.929 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 14:52:11.929296 | orchestrator | 14:52:11.929 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 14:52:11.929331 | orchestrator | 14:52:11.929 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 14:52:11.929363 | orchestrator | 
14:52:11.929 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 14:52:11.929398 | orchestrator | 14:52:11.929 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.929433 | orchestrator | 14:52:11.929 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 14:52:11.929467 | orchestrator | 14:52:11.929 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 14:52:11.929500 | orchestrator | 14:52:11.929 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 14:52:11.929538 | orchestrator | 14:52:11.929 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 14:52:11.929572 | orchestrator | 14:52:11.929 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.929603 | orchestrator | 14:52:11.929 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 14:52:11.929637 | orchestrator | 14:52:11.929 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 14:52:11.929671 | orchestrator | 14:52:11.929 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 14:52:11.929705 | orchestrator | 14:52:11.929 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 14:52:11.929741 | orchestrator | 14:52:11.929 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.929774 | orchestrator | 14:52:11.929 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 14:52:11.929829 | orchestrator | 14:52:11.929 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.929835 | orchestrator | 14:52:11.929 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.929861 | orchestrator | 14:52:11.929 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 14:52:11.929869 | orchestrator | 14:52:11.929 STDOUT terraform:  } 2025-07-12 14:52:11.929877 | orchestrator | 14:52:11.929 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.929910 | orchestrator | 14:52:11.929 STDOUT terraform: 
 + ip_address = "192.168.16.254/20" 2025-07-12 14:52:11.929916 | orchestrator | 14:52:11.929 STDOUT terraform:  } 2025-07-12 14:52:11.929932 | orchestrator | 14:52:11.929 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.929958 | orchestrator | 14:52:11.929 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 14:52:11.929965 | orchestrator | 14:52:11.929 STDOUT terraform:  } 2025-07-12 14:52:11.929986 | orchestrator | 14:52:11.929 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.930026 | orchestrator | 14:52:11.929 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 14:52:11.930034 | orchestrator | 14:52:11.930 STDOUT terraform:  } 2025-07-12 14:52:11.930053 | orchestrator | 14:52:11.930 STDOUT terraform:  + binding (known after apply) 2025-07-12 14:52:11.930059 | orchestrator | 14:52:11.930 STDOUT terraform:  + fixed_ip { 2025-07-12 14:52:11.930087 | orchestrator | 14:52:11.930 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-12 14:52:11.930113 | orchestrator | 14:52:11.930 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 14:52:11.930119 | orchestrator | 14:52:11.930 STDOUT terraform:  } 2025-07-12 14:52:11.930135 | orchestrator | 14:52:11.930 STDOUT terraform:  } 2025-07-12 14:52:11.930179 | orchestrator | 14:52:11.930 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-12 14:52:11.930221 | orchestrator | 14:52:11.930 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 14:52:11.930256 | orchestrator | 14:52:11.930 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 14:52:11.930290 | orchestrator | 14:52:11.930 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 14:52:11.930323 | orchestrator | 14:52:11.930 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 14:52:11.930357 | orchestrator | 14:52:11.930 STDOUT terraform:  + all_tags = (known 
after apply) 2025-07-12 14:52:11.930391 | orchestrator | 14:52:11.930 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 14:52:11.930427 | orchestrator | 14:52:11.930 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 14:52:11.930461 | orchestrator | 14:52:11.930 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 14:52:11.930496 | orchestrator | 14:52:11.930 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 14:52:11.930536 | orchestrator | 14:52:11.930 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.930586 | orchestrator | 14:52:11.930 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 14:52:11.930638 | orchestrator | 14:52:11.930 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 14:52:11.930687 | orchestrator | 14:52:11.930 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 14:52:11.930725 | orchestrator | 14:52:11.930 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 14:52:11.930759 | orchestrator | 14:52:11.930 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.930793 | orchestrator | 14:52:11.930 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 14:52:11.930850 | orchestrator | 14:52:11.930 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.930870 | orchestrator | 14:52:11.930 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.930897 | orchestrator | 14:52:11.930 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 14:52:11.930903 | orchestrator | 14:52:11.930 STDOUT terraform:  } 2025-07-12 14:52:11.930927 | orchestrator | 14:52:11.930 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.930955 | orchestrator | 14:52:11.930 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 14:52:11.930961 | orchestrator | 14:52:11.930 STDOUT terraform:  } 2025-07-12 14:52:11.930983 | orchestrator | 14:52:11.930 
STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.931011 | orchestrator | 14:52:11.930 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 14:52:11.931017 | orchestrator | 14:52:11.931 STDOUT terraform:  } 2025-07-12 14:52:11.931038 | orchestrator | 14:52:11.931 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.931065 | orchestrator | 14:52:11.931 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 14:52:11.931071 | orchestrator | 14:52:11.931 STDOUT terraform:  } 2025-07-12 14:52:11.931118 | orchestrator | 14:52:11.931 STDOUT terraform:  + binding (known after apply) 2025-07-12 14:52:11.931124 | orchestrator | 14:52:11.931 STDOUT terraform:  + fixed_ip { 2025-07-12 14:52:11.931133 | orchestrator | 14:52:11.931 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-07-12 14:52:11.931162 | orchestrator | 14:52:11.931 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 14:52:11.931168 | orchestrator | 14:52:11.931 STDOUT terraform:  } 2025-07-12 14:52:11.931187 | orchestrator | 14:52:11.931 STDOUT terraform:  } 2025-07-12 14:52:11.931232 | orchestrator | 14:52:11.931 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-12 14:52:11.931277 | orchestrator | 14:52:11.931 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 14:52:11.931313 | orchestrator | 14:52:11.931 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 14:52:11.931346 | orchestrator | 14:52:11.931 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 14:52:11.931379 | orchestrator | 14:52:11.931 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 14:52:11.931414 | orchestrator | 14:52:11.931 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.931448 | orchestrator | 14:52:11.931 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 14:52:11.931483 | orchestrator | 
14:52:11.931 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 14:52:11.931517 | orchestrator | 14:52:11.931 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 14:52:11.931551 | orchestrator | 14:52:11.931 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 14:52:11.931585 | orchestrator | 14:52:11.931 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.931620 | orchestrator | 14:52:11.931 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 14:52:11.931654 | orchestrator | 14:52:11.931 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 14:52:11.931687 | orchestrator | 14:52:11.931 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 14:52:11.931722 | orchestrator | 14:52:11.931 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 14:52:11.931763 | orchestrator | 14:52:11.931 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.931792 | orchestrator | 14:52:11.931 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 14:52:11.931839 | orchestrator | 14:52:11.931 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.931849 | orchestrator | 14:52:11.931 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.931878 | orchestrator | 14:52:11.931 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 14:52:11.931884 | orchestrator | 14:52:11.931 STDOUT terraform:  } 2025-07-12 14:52:11.931905 | orchestrator | 14:52:11.931 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.931933 | orchestrator | 14:52:11.931 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 14:52:11.931939 | orchestrator | 14:52:11.931 STDOUT terraform:  } 2025-07-12 14:52:11.931960 | orchestrator | 14:52:11.931 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.931987 | orchestrator | 14:52:11.931 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 
14:52:11.931993 | orchestrator | 14:52:11.931 STDOUT terraform:  } 2025-07-12 14:52:11.932017 | orchestrator | 14:52:11.931 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 14:52:11.932043 | orchestrator | 14:52:11.932 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 14:52:11.932050 | orchestrator | 14:52:11.932 STDOUT terraform:  } 2025-07-12 14:52:11.932074 | orchestrator | 14:52:11.932 STDOUT terraform:  + binding (known after apply) 2025-07-12 14:52:11.932080 | orchestrator | 14:52:11.932 STDOUT terraform:  + fixed_ip { 2025-07-12 14:52:11.932108 | orchestrator | 14:52:11.932 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-12 14:52:11.932135 | orchestrator | 14:52:11.932 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 14:52:11.932141 | orchestrator | 14:52:11.932 STDOUT terraform:  } 2025-07-12 14:52:11.932158 | orchestrator | 14:52:11.932 STDOUT terraform:  } 2025-07-12 14:52:11.932204 | orchestrator | 14:52:11.932 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-12 14:52:11.932249 | orchestrator | 14:52:11.932 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-12 14:52:11.932267 | orchestrator | 14:52:11.932 STDOUT terraform:  + force_destroy = false 2025-07-12 14:52:11.932295 | orchestrator | 14:52:11.932 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.932321 | orchestrator | 14:52:11.932 STDOUT terraform:  + port_id = (known after apply) 2025-07-12 14:52:11.932349 | orchestrator | 14:52:11.932 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.932375 | orchestrator | 14:52:11.932 STDOUT terraform:  + router_id = (known after apply) 2025-07-12 14:52:11.932403 | orchestrator | 14:52:11.932 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 14:52:11.932409 | orchestrator | 14:52:11.932 STDOUT terraform:  } 2025-07-12 14:52:11.932446 | orchestrator | 14:52:11.932 
STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-07-12 14:52:11.932479 | orchestrator | 14:52:11.932 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-07-12 14:52:11.932514 | orchestrator | 14:52:11.932 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 14:52:11.932549 | orchestrator | 14:52:11.932 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.932571 | orchestrator | 14:52:11.932 STDOUT terraform:  + availability_zone_hints = [ 2025-07-12 14:52:11.932578 | orchestrator | 14:52:11.932 STDOUT terraform:  + "nova", 2025-07-12 14:52:11.932592 | orchestrator | 14:52:11.932 STDOUT terraform:  ] 2025-07-12 14:52:11.932626 | orchestrator | 14:52:11.932 STDOUT terraform:  + distributed = (known after apply) 2025-07-12 14:52:11.932661 | orchestrator | 14:52:11.932 STDOUT terraform:  + enable_snat = (known after apply) 2025-07-12 14:52:11.932707 | orchestrator | 14:52:11.932 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-07-12 14:52:11.932742 | orchestrator | 14:52:11.932 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-07-12 14:52:11.932782 | orchestrator | 14:52:11.932 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.932827 | orchestrator | 14:52:11.932 STDOUT terraform:  + name = "testbed" 2025-07-12 14:52:11.932851 | orchestrator | 14:52:11.932 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.932886 | orchestrator | 14:52:11.932 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.932912 | orchestrator | 14:52:11.932 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-07-12 14:52:11.932918 | orchestrator | 14:52:11.932 STDOUT terraform:  } 2025-07-12 14:52:11.932972 | orchestrator | 14:52:11.932 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-07-12 14:52:11.933023 | 
orchestrator | 14:52:11.932 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-07-12 14:52:11.933046 | orchestrator | 14:52:11.933 STDOUT terraform:  + description = "ssh" 2025-07-12 14:52:11.933074 | orchestrator | 14:52:11.933 STDOUT terraform:  + direction = "ingress" 2025-07-12 14:52:11.933098 | orchestrator | 14:52:11.933 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 14:52:11.933132 | orchestrator | 14:52:11.933 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.933156 | orchestrator | 14:52:11.933 STDOUT terraform:  + port_range_max = 22 2025-07-12 14:52:11.933179 | orchestrator | 14:52:11.933 STDOUT terraform:  + port_range_min = 22 2025-07-12 14:52:11.933202 | orchestrator | 14:52:11.933 STDOUT terraform:  + protocol = "tcp" 2025-07-12 14:52:11.933237 | orchestrator | 14:52:11.933 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.933270 | orchestrator | 14:52:11.933 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 14:52:11.933304 | orchestrator | 14:52:11.933 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 14:52:11.933333 | orchestrator | 14:52:11.933 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 14:52:11.933368 | orchestrator | 14:52:11.933 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 14:52:11.933403 | orchestrator | 14:52:11.933 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.933409 | orchestrator | 14:52:11.933 STDOUT terraform:  } 2025-07-12 14:52:11.933462 | orchestrator | 14:52:11.933 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-07-12 14:52:11.933512 | orchestrator | 14:52:11.933 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-07-12 14:52:11.933540 | orchestrator | 14:52:11.933 STDOUT terraform:  + 
description = "wireguard" 2025-07-12 14:52:11.933567 | orchestrator | 14:52:11.933 STDOUT terraform:  + direction = "ingress" 2025-07-12 14:52:11.933591 | orchestrator | 14:52:11.933 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 14:52:11.933627 | orchestrator | 14:52:11.933 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.933650 | orchestrator | 14:52:11.933 STDOUT terraform:  + port_range_max = 51820 2025-07-12 14:52:11.933673 | orchestrator | 14:52:11.933 STDOUT terraform:  + port_range_min = 51820 2025-07-12 14:52:11.933696 | orchestrator | 14:52:11.933 STDOUT terraform:  + protocol = "udp" 2025-07-12 14:52:11.933730 | orchestrator | 14:52:11.933 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.933764 | orchestrator | 14:52:11.933 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 14:52:11.933799 | orchestrator | 14:52:11.933 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 14:52:11.933846 | orchestrator | 14:52:11.933 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 14:52:11.933887 | orchestrator | 14:52:11.933 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 14:52:11.933916 | orchestrator | 14:52:11.933 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.933922 | orchestrator | 14:52:11.933 STDOUT terraform:  } 2025-07-12 14:52:11.933975 | orchestrator | 14:52:11.933 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-07-12 14:52:11.934039 | orchestrator | 14:52:11.933 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-07-12 14:52:11.934067 | orchestrator | 14:52:11.934 STDOUT terraform:  + direction = "ingress" 2025-07-12 14:52:11.934090 | orchestrator | 14:52:11.934 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 14:52:11.934125 | orchestrator | 14:52:11.934 STDOUT terraform:  + id = (known 
after apply) 2025-07-12 14:52:11.934153 | orchestrator | 14:52:11.934 STDOUT terraform:  + protocol = "tcp" 2025-07-12 14:52:11.934188 | orchestrator | 14:52:11.934 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.934221 | orchestrator | 14:52:11.934 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 14:52:11.934256 | orchestrator | 14:52:11.934 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 14:52:11.934289 | orchestrator | 14:52:11.934 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-12 14:52:11.934324 | orchestrator | 14:52:11.934 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 14:52:11.934359 | orchestrator | 14:52:11.934 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.934365 | orchestrator | 14:52:11.934 STDOUT terraform:  } 2025-07-12 14:52:11.934417 | orchestrator | 14:52:11.934 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-07-12 14:52:11.934467 | orchestrator | 14:52:11.934 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-07-12 14:52:11.934495 | orchestrator | 14:52:11.934 STDOUT terraform:  + direction = "ingress" 2025-07-12 14:52:11.934518 | orchestrator | 14:52:11.934 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 14:52:11.934554 | orchestrator | 14:52:11.934 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.934578 | orchestrator | 14:52:11.934 STDOUT terraform:  + protocol = "udp" 2025-07-12 14:52:11.934612 | orchestrator | 14:52:11.934 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.934645 | orchestrator | 14:52:11.934 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 14:52:11.934679 | orchestrator | 14:52:11.934 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 14:52:11.934712 | orchestrator | 
14:52:11.934 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-12 14:52:11.934747 | orchestrator | 14:52:11.934 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 14:52:11.934781 | orchestrator | 14:52:11.934 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.934787 | orchestrator | 14:52:11.934 STDOUT terraform:  } 2025-07-12 14:52:11.934850 | orchestrator | 14:52:11.934 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-07-12 14:52:11.934900 | orchestrator | 14:52:11.934 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-07-12 14:52:11.934928 | orchestrator | 14:52:11.934 STDOUT terraform:  + direction = "ingress" 2025-07-12 14:52:11.934951 | orchestrator | 14:52:11.934 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 14:52:11.934986 | orchestrator | 14:52:11.934 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.935010 | orchestrator | 14:52:11.934 STDOUT terraform:  + protocol = "icmp" 2025-07-12 14:52:11.935047 | orchestrator | 14:52:11.935 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.935078 | orchestrator | 14:52:11.935 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 14:52:11.935112 | orchestrator | 14:52:11.935 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 14:52:11.935139 | orchestrator | 14:52:11.935 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 14:52:11.935174 | orchestrator | 14:52:11.935 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 14:52:11.935209 | orchestrator | 14:52:11.935 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.935215 | orchestrator | 14:52:11.935 STDOUT terraform:  } 2025-07-12 14:52:11.935267 | orchestrator | 14:52:11.935 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-07-12 14:52:11.935314 | orchestrator | 14:52:11.935 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-07-12 14:52:11.935342 | orchestrator | 14:52:11.935 STDOUT terraform:  + direction = "ingress" 2025-07-12 14:52:11.935365 | orchestrator | 14:52:11.935 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 14:52:11.935401 | orchestrator | 14:52:11.935 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.935423 | orchestrator | 14:52:11.935 STDOUT terraform:  + protocol = "tcp" 2025-07-12 14:52:11.935459 | orchestrator | 14:52:11.935 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.935492 | orchestrator | 14:52:11.935 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 14:52:11.935527 | orchestrator | 14:52:11.935 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 14:52:11.935555 | orchestrator | 14:52:11.935 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 14:52:11.935589 | orchestrator | 14:52:11.935 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 14:52:11.935624 | orchestrator | 14:52:11.935 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.935630 | orchestrator | 14:52:11.935 STDOUT terraform:  } 2025-07-12 14:52:11.935681 | orchestrator | 14:52:11.935 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-07-12 14:52:11.935730 | orchestrator | 14:52:11.935 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-07-12 14:52:11.935757 | orchestrator | 14:52:11.935 STDOUT terraform:  + direction = "ingress" 2025-07-12 14:52:11.935780 | orchestrator | 14:52:11.935 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 14:52:11.935952 | orchestrator | 14:52:11.935 STDOUT terraform:  + id = (known 
after apply) 2025-07-12 14:52:11.936034 | orchestrator | 14:52:11.935 STDOUT terraform:  + protocol = "udp" 2025-07-12 14:52:11.936049 | orchestrator | 14:52:11.935 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.936070 | orchestrator | 14:52:11.935 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 14:52:11.936104 | orchestrator | 14:52:11.935 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 14:52:11.936117 | orchestrator | 14:52:11.935 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 14:52:11.936127 | orchestrator | 14:52:11.935 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 14:52:11.936138 | orchestrator | 14:52:11.935 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.936149 | orchestrator | 14:52:11.936 STDOUT terraform:  } 2025-07-12 14:52:11.936165 | orchestrator | 14:52:11.936 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-07-12 14:52:11.936178 | orchestrator | 14:52:11.936 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-07-12 14:52:11.936189 | orchestrator | 14:52:11.936 STDOUT terraform:  + direction = "ingress" 2025-07-12 14:52:11.936200 | orchestrator | 14:52:11.936 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 14:52:11.936215 | orchestrator | 14:52:11.936 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.936226 | orchestrator | 14:52:11.936 STDOUT terraform:  + protocol = "icmp" 2025-07-12 14:52:11.936240 | orchestrator | 14:52:11.936 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.936289 | orchestrator | 14:52:11.936 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 14:52:11.936306 | orchestrator | 14:52:11.936 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 14:52:11.936346 | orchestrator | 14:52:11.936 STDOUT 
terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 14:52:11.936363 | orchestrator | 14:52:11.936 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 14:52:11.936474 | orchestrator | 14:52:11.936 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.936494 | orchestrator | 14:52:11.936 STDOUT terraform:  } 2025-07-12 14:52:11.936500 | orchestrator | 14:52:11.936 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-07-12 14:52:11.936509 | orchestrator | 14:52:11.936 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-07-12 14:52:11.936528 | orchestrator | 14:52:11.936 STDOUT terraform:  + description = "vrrp" 2025-07-12 14:52:11.936552 | orchestrator | 14:52:11.936 STDOUT terraform:  + direction = "ingress" 2025-07-12 14:52:11.936576 | orchestrator | 14:52:11.936 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 14:52:11.936615 | orchestrator | 14:52:11.936 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.936658 | orchestrator | 14:52:11.936 STDOUT terraform:  + protocol = "112" 2025-07-12 14:52:11.936666 | orchestrator | 14:52:11.936 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.936700 | orchestrator | 14:52:11.936 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 14:52:11.936734 | orchestrator | 14:52:11.936 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 14:52:11.936762 | orchestrator | 14:52:11.936 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 14:52:11.936798 | orchestrator | 14:52:11.936 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 14:52:11.936849 | orchestrator | 14:52:11.936 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.936856 | orchestrator | 14:52:11.936 STDOUT terraform:  } 2025-07-12 14:52:11.936911 | orchestrator | 14:52:11.936 STDOUT terraform:  # 
openstack_networking_secgroup_v2.security_group_management will be created 2025-07-12 14:52:11.936957 | orchestrator | 14:52:11.936 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-07-12 14:52:11.936984 | orchestrator | 14:52:11.936 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.937076 | orchestrator | 14:52:11.936 STDOUT terraform:  + description = "management security group" 2025-07-12 14:52:11.937106 | orchestrator | 14:52:11.937 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.937133 | orchestrator | 14:52:11.937 STDOUT terraform:  + name = "testbed-management" 2025-07-12 14:52:11.937161 | orchestrator | 14:52:11.937 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.937188 | orchestrator | 14:52:11.937 STDOUT terraform:  + stateful = (known after apply) 2025-07-12 14:52:11.937216 | orchestrator | 14:52:11.937 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.937222 | orchestrator | 14:52:11.937 STDOUT terraform:  } 2025-07-12 14:52:11.937268 | orchestrator | 14:52:11.937 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-07-12 14:52:11.937316 | orchestrator | 14:52:11.937 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-07-12 14:52:11.937339 | orchestrator | 14:52:11.937 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.937365 | orchestrator | 14:52:11.937 STDOUT terraform:  + description = "node security group" 2025-07-12 14:52:11.937393 | orchestrator | 14:52:11.937 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.937415 | orchestrator | 14:52:11.937 STDOUT terraform:  + name = "testbed-node" 2025-07-12 14:52:11.937442 | orchestrator | 14:52:11.937 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.937468 | orchestrator | 14:52:11.937 STDOUT terraform:  + stateful = (known after 
apply) 2025-07-12 14:52:11.937495 | orchestrator | 14:52:11.937 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.937501 | orchestrator | 14:52:11.937 STDOUT terraform:  } 2025-07-12 14:52:11.937546 | orchestrator | 14:52:11.937 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-07-12 14:52:11.937589 | orchestrator | 14:52:11.937 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-07-12 14:52:11.937617 | orchestrator | 14:52:11.937 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 14:52:11.937646 | orchestrator | 14:52:11.937 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-07-12 14:52:11.937665 | orchestrator | 14:52:11.937 STDOUT terraform:  + dns_nameservers = [ 2025-07-12 14:52:11.937682 | orchestrator | 14:52:11.937 STDOUT terraform:  + "8.8.8.8", 2025-07-12 14:52:11.937693 | orchestrator | 14:52:11.937 STDOUT terraform:  + "9.9.9.9", 2025-07-12 14:52:11.937703 | orchestrator | 14:52:11.937 STDOUT terraform:  ] 2025-07-12 14:52:11.937723 | orchestrator | 14:52:11.937 STDOUT terraform:  + enable_dhcp = true 2025-07-12 14:52:11.937753 | orchestrator | 14:52:11.937 STDOUT terraform:  + gateway_ip = (known after apply) 2025-07-12 14:52:11.937783 | orchestrator | 14:52:11.937 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.937802 | orchestrator | 14:52:11.937 STDOUT terraform:  + ip_version = 4 2025-07-12 14:52:11.937841 | orchestrator | 14:52:11.937 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-07-12 14:52:11.937870 | orchestrator | 14:52:11.937 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-07-12 14:52:11.937905 | orchestrator | 14:52:11.937 STDOUT terraform:  + name = "subnet-testbed-management" 2025-07-12 14:52:11.937933 | orchestrator | 14:52:11.937 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 14:52:11.937952 | orchestrator | 14:52:11.937 STDOUT terraform:  + no_gateway = 
false 2025-07-12 14:52:11.937982 | orchestrator | 14:52:11.937 STDOUT terraform:  + region = (known after apply) 2025-07-12 14:52:11.938010 | orchestrator | 14:52:11.937 STDOUT terraform:  + service_types = (known after apply) 2025-07-12 14:52:11.938052 | orchestrator | 14:52:11.938 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 14:52:11.938070 | orchestrator | 14:52:11.938 STDOUT terraform:  + allocation_pool { 2025-07-12 14:52:11.938092 | orchestrator | 14:52:11.938 STDOUT terraform:  + end = "192.168.31.250" 2025-07-12 14:52:11.938116 | orchestrator | 14:52:11.938 STDOUT terraform:  + start = "192.168.31.200" 2025-07-12 14:52:11.938123 | orchestrator | 14:52:11.938 STDOUT terraform:  } 2025-07-12 14:52:11.938139 | orchestrator | 14:52:11.938 STDOUT terraform:  } 2025-07-12 14:52:11.938156 | orchestrator | 14:52:11.938 STDOUT terraform:  # terraform_data.image will be created 2025-07-12 14:52:11.938179 | orchestrator | 14:52:11.938 STDOUT terraform:  + resource "terraform_data" "image" { 2025-07-12 14:52:11.938201 | orchestrator | 14:52:11.938 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.938222 | orchestrator | 14:52:11.938 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-12 14:52:11.938250 | orchestrator | 14:52:11.938 STDOUT terraform:  + output = (known after apply) 2025-07-12 14:52:11.938257 | orchestrator | 14:52:11.938 STDOUT terraform:  } 2025-07-12 14:52:11.938286 | orchestrator | 14:52:11.938 STDOUT terraform:  # terraform_data.image_node will be created 2025-07-12 14:52:11.938309 | orchestrator | 14:52:11.938 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-07-12 14:52:11.938333 | orchestrator | 14:52:11.938 STDOUT terraform:  + id = (known after apply) 2025-07-12 14:52:11.938351 | orchestrator | 14:52:11.938 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-12 14:52:11.938374 | orchestrator | 14:52:11.938 STDOUT terraform:  + output = (known after apply) 2025-07-12 14:52:11.938387 | 
orchestrator | 14:52:11.938 STDOUT terraform:  } 2025-07-12 14:52:11.938415 | orchestrator | 14:52:11.938 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-07-12 14:52:11.938426 | orchestrator | 14:52:11.938 STDOUT terraform: Changes to Outputs: 2025-07-12 14:52:11.938449 | orchestrator | 14:52:11.938 STDOUT terraform:  + manager_address = (sensitive value) 2025-07-12 14:52:11.938473 | orchestrator | 14:52:11.938 STDOUT terraform:  + private_key = (sensitive value) 2025-07-12 14:52:12.147208 | orchestrator | 14:52:12.147 STDOUT terraform: terraform_data.image: Creating... 2025-07-12 14:52:12.147280 | orchestrator | 14:52:12.147 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=5ef91b5e-a194-f766-a7cd-31782cc6598c] 2025-07-12 14:52:12.148122 | orchestrator | 14:52:12.148 STDOUT terraform: terraform_data.image_node: Creating... 2025-07-12 14:52:12.149915 | orchestrator | 14:52:12.149 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=18f096d2-c2d6-a14b-d315-49f1cecb2226] 2025-07-12 14:52:12.166978 | orchestrator | 14:52:12.166 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-07-12 14:52:12.175979 | orchestrator | 14:52:12.175 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-07-12 14:52:12.176347 | orchestrator | 14:52:12.176 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-07-12 14:52:12.176413 | orchestrator | 14:52:12.176 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-07-12 14:52:12.179910 | orchestrator | 14:52:12.179 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-07-12 14:52:12.179945 | orchestrator | 14:52:12.179 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-07-12 14:52:12.179950 | orchestrator | 14:52:12.179 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
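The plan above includes a rule with `protocol = "112"` (the IP protocol number for VRRP, matching its `description = "vrrp"`). A minimal sketch of what such a rule looks like in the OpenStack Terraform provider, using the values shown in the plan (the `security_group_id` reference is an assumption, since the plan only shows it as "known after apply"):

```hcl
# Hypothetical reconstruction of the VRRP rule from the plan output above.
# Attribute values are taken verbatim from the plan; the security group it
# attaches to is an assumption.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP has no name alias, so the protocol number is used
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```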
2025-07-12 14:52:12.179983 | orchestrator | 14:52:12.179 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-07-12 14:52:12.180548 | orchestrator | 14:52:12.180 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-07-12 14:52:12.181672 | orchestrator | 14:52:12.181 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-07-12 14:52:12.648977 | orchestrator | 14:52:12.648 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-07-12 14:52:12.654974 | orchestrator | 14:52:12.654 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-07-12 14:52:12.687661 | orchestrator | 14:52:12.687 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-07-12 14:52:12.695685 | orchestrator | 14:52:12.695 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-07-12 14:52:12.708721 | orchestrator | 14:52:12.708 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-07-12 14:52:12.714547 | orchestrator | 14:52:12.713 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-07-12 14:52:13.313009 | orchestrator | 14:52:13.312 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 0s [id=8b7ad1c8-dc83-4d3a-bb0a-3515fdc85390] 2025-07-12 14:52:13.322511 | orchestrator | 14:52:13.322 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 
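The `terraform_data.image` resource planned earlier carries `input = "Ubuntu 24.04"`, and the `data.openstack_images_image_v2.image` read here presumably resolves that name to an image ID. A sketch of that pairing (the `most_recent` flag and the exact wiring are assumptions, not visible in the log):

```hcl
# Hypothetical sketch: resolve the image name seen in the log to a Glance image ID.
# `most_recent` is an assumption to disambiguate multiple matching images.
data "openstack_images_image_v2" "image" {
  name        = "Ubuntu 24.04"
  most_recent = true
}

# terraform_data with the image name as input, as shown in the plan; commonly
# used as a replace trigger so dependent resources are recreated when the
# image name changes (the actual purpose here is an assumption).
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}
```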
2025-07-12 14:52:15.796749 | orchestrator | 14:52:15.796 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=4e5b43f9-5557-4a03-9895-8e671249b5b2] 2025-07-12 14:52:15.811093 | orchestrator | 14:52:15.808 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=df26c144-7e2c-487c-9e8f-effdfe3555dd] 2025-07-12 14:52:15.814220 | orchestrator | 14:52:15.814 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-07-12 14:52:15.819136 | orchestrator | 14:52:15.818 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=2d047699-b504-4740-af1d-648b929835be] 2025-07-12 14:52:15.822994 | orchestrator | 14:52:15.822 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=c785386ab428db717e976cbbcdfb6db03c4232d5] 2025-07-12 14:52:15.827891 | orchestrator | 14:52:15.827 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-07-12 14:52:15.828312 | orchestrator | 14:52:15.828 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-07-12 14:52:15.831393 | orchestrator | 14:52:15.831 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=c7167a15c5b41e1de6b51348409c21acf126f776] 2025-07-12 14:52:15.831829 | orchestrator | 14:52:15.831 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-07-12 14:52:15.833220 | orchestrator | 14:52:15.833 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=e2bb8cb1-296e-41d9-9659-79f1ba9bca2a] 2025-07-12 14:52:15.836917 | orchestrator | 14:52:15.836 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-07-12 14:52:15.837255 | orchestrator | 14:52:15.837 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2025-07-12 14:52:15.840355 | orchestrator | 14:52:15.840 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=6698acfe-c205-405d-be66-12c19a56960d] 2025-07-12 14:52:15.841906 | orchestrator | 14:52:15.841 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=c6699afa-886d-4139-8698-8a8fafe98984] 2025-07-12 14:52:15.847805 | orchestrator | 14:52:15.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-07-12 14:52:15.853701 | orchestrator | 14:52:15.849 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-07-12 14:52:15.855405 | orchestrator | 14:52:15.855 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=0aec1d56-840e-4d62-87fc-8ad42993ed21] 2025-07-12 14:52:15.861889 | orchestrator | 14:52:15.861 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-07-12 14:52:15.880107 | orchestrator | 14:52:15.879 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=80301f58-6d09-4d29-bcb1-b411833d1e96] 2025-07-12 14:52:15.936449 | orchestrator | 14:52:15.936 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=9415964e-ba41-448d-be5c-d5fc92ddea3f] 2025-07-12 14:52:16.656155 | orchestrator | 14:52:16.655 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=65f5a376-3b2a-4015-9db5-6b5137fbfa42] 2025-07-12 14:52:16.775361 | orchestrator | 14:52:16.775 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=56c99e3a-78d9-4bb6-b677-fba2b0e57551] 2025-07-12 14:52:16.784871 | orchestrator | 14:52:16.784 STDOUT terraform: openstack_networking_router_v2.router: Creating... 
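The management subnet created here matches the plan output earlier in the log (CIDR `192.168.16.0/20`, DHCP pool `192.168.31.200`–`192.168.31.250`, Google and Quad9 resolvers). A sketch of the corresponding HCL, with all values taken from the plan; only the `network_id` reference is an assumption:

```hcl
# Reconstructed from the plan output above; values are verbatim from the plan.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP allocation confined to the top of the /20, leaving the rest for
  # statically addressed nodes.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```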
2025-07-12 14:52:19.257413 | orchestrator | 14:52:19.257 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=934592de-8849-4d55-9151-342b895547cd] 2025-07-12 14:52:19.265806 | orchestrator | 14:52:19.265 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=e3892144-1c31-4d8e-8a84-28397e34627e] 2025-07-12 14:52:19.302644 | orchestrator | 14:52:19.302 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=b361a598-1b86-4f22-9f34-916651b9c093] 2025-07-12 14:52:19.326007 | orchestrator | 14:52:19.325 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=f3e1d17b-8112-49c7-87d4-1e73815fd43e] 2025-07-12 14:52:19.367758 | orchestrator | 14:52:19.367 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=e870d793-04c8-4d31-a748-bbae651abfd8] 2025-07-12 14:52:19.389670 | orchestrator | 14:52:19.389 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=4ba9f296-83b5-4523-b70f-ede13f56d35b] 2025-07-12 14:52:21.289429 | orchestrator | 14:52:21.288 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=34f26853-bac5-456a-a696-422d5b463268] 2025-07-12 14:52:21.296271 | orchestrator | 14:52:21.295 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-07-12 14:52:21.297211 | orchestrator | 14:52:21.296 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-07-12 14:52:21.299951 | orchestrator | 14:52:21.299 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 
2025-07-12 14:52:21.484811 | orchestrator | 14:52:21.484 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=1576e623-449d-478e-ac58-a07e24ff8f50] 2025-07-12 14:52:21.494994 | orchestrator | 14:52:21.494 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-07-12 14:52:21.499617 | orchestrator | 14:52:21.499 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-07-12 14:52:21.499844 | orchestrator | 14:52:21.499 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-07-12 14:52:21.501135 | orchestrator | 14:52:21.501 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-07-12 14:52:21.501633 | orchestrator | 14:52:21.501 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-07-12 14:52:21.502082 | orchestrator | 14:52:21.501 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-07-12 14:52:21.775807 | orchestrator | 14:52:21.775 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=7d20c1b8-0d84-4e23-bdd2-f02c9f33554a] 2025-07-12 14:52:21.784066 | orchestrator | 14:52:21.783 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-07-12 14:52:21.784874 | orchestrator | 14:52:21.784 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-07-12 14:52:21.785668 | orchestrator | 14:52:21.785 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
2025-07-12 14:52:21.831031 | orchestrator | 14:52:21.830 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=3aa818a7-c0e5-4245-be6c-7f6198e01a06] 2025-07-12 14:52:21.834596 | orchestrator | 14:52:21.834 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-07-12 14:52:21.934764 | orchestrator | 14:52:21.934 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=1401bbbe-b05e-4dc5-a395-52e1f07823ec] 2025-07-12 14:52:21.944752 | orchestrator | 14:52:21.944 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-07-12 14:52:22.004232 | orchestrator | 14:52:22.003 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=deff978d-3044-4d7f-83ac-6ebf9fb9900a] 2025-07-12 14:52:22.024851 | orchestrator | 14:52:22.024 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-07-12 14:52:22.095628 | orchestrator | 14:52:22.095 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=d5e3974c-eb2c-4105-bddc-424682dd4cb3] 2025-07-12 14:52:22.111903 | orchestrator | 14:52:22.111 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-07-12 14:52:22.131420 | orchestrator | 14:52:22.131 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=14d43053-6697-40b9-a14f-0d9583703cc0] 2025-07-12 14:52:22.143541 | orchestrator | 14:52:22.143 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 
2025-07-12 14:52:22.230800 | orchestrator | 14:52:22.230 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=99a777a8-7f68-4b7b-9953-282b1f850f5d] 2025-07-12 14:52:22.245024 | orchestrator | 14:52:22.244 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-07-12 14:52:22.521074 | orchestrator | 14:52:22.520 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=b7b9afc7-1891-4548-8b6c-3d208af325c8] 2025-07-12 14:52:22.535733 | orchestrator | 14:52:22.535 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-07-12 14:52:22.872733 | orchestrator | 14:52:22.872 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=688f5d61-f27d-4ac7-9ce6-f2aad3b7260a] 2025-07-12 14:52:22.879483 | orchestrator | 14:52:22.879 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c6d7c1fd-cfce-4900-bfcc-ffbf505b8e2a] 2025-07-12 14:52:22.962738 | orchestrator | 14:52:22.962 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=202297b1-d773-43e7-ba54-2e00f0b4196b] 2025-07-12 14:52:23.003483 | orchestrator | 14:52:23.003 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=3e7fc8f0-2e8c-42b8-8496-7c486fcba671] 2025-07-12 14:52:23.181813 | orchestrator | 14:52:23.181 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=431306d7-c4ed-4df2-9e78-ac0459166ee6] 2025-07-12 14:52:23.424067 | orchestrator | 14:52:23.423 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=4ddaae55-cec6-4ca5-893c-0d4ee03b591c] 2025-07-12 14:52:23.545227 | orchestrator | 14:52:23.544 
STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=55ae7814-2d16-4b33-9b68-88e814fdff4c] 2025-07-12 14:52:23.567531 | orchestrator | 14:52:23.567 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=3839840c-d2ec-4312-9e61-52f93538f348] 2025-07-12 14:52:23.574475 | orchestrator | 14:52:23.574 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-07-12 14:52:23.824806 | orchestrator | 14:52:23.824 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=52146461-10b4-4dcc-811b-86f2b2e02eb7] 2025-07-12 14:52:24.362733 | orchestrator | 14:52:24.362 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=aed24db6-4188-4468-ab69-c4f86c3fea3a] 2025-07-12 14:52:24.405432 | orchestrator | 14:52:24.405 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-07-12 14:52:24.409001 | orchestrator | 14:52:24.408 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-07-12 14:52:24.412987 | orchestrator | 14:52:24.412 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-07-12 14:52:24.413311 | orchestrator | 14:52:24.413 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-07-12 14:52:24.413422 | orchestrator | 14:52:24.413 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-07-12 14:52:24.424981 | orchestrator | 14:52:24.424 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 
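Six `node_server` instances start creating only after all six `node_port_management` ports exist, which suggests the instances consume the pre-created ports rather than letting Nova allocate them. A hypothetical sketch of that pattern (instance name, flavor variable, and the port wiring are assumptions; the log only shows resource names and counts):

```hcl
# Hypothetical sketch: six nodes, each attached to its pre-created management
# port. Flavor and naming scheme are not visible in the log.
resource "openstack_compute_instance_v2" "node_server" {
  count       = 6
  name        = "testbed-node-${count.index}" # naming scheme is an assumption
  image_id    = data.openstack_images_image_v2.image_node.id
  flavor_name = var.flavor_node # flavor is not visible in the log
  key_pair    = openstack_compute_keypair_v2.key.name

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```

Pre-creating ports this way lets security groups and fixed IPs be managed on the port resource independently of the instance lifecycle.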
2025-07-12 14:52:25.460501 | orchestrator | 14:52:25.460 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=00238a29-4dee-45c7-8182-36af16613050] 2025-07-12 14:52:25.473195 | orchestrator | 14:52:25.472 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-07-12 14:52:25.474322 | orchestrator | 14:52:25.474 STDOUT terraform: local_file.inventory: Creating... 2025-07-12 14:52:25.480300 | orchestrator | 14:52:25.480 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-07-12 14:52:25.481613 | orchestrator | 14:52:25.481 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=d269ff21a17a7a63d216ec453c73adeb21ca83df] 2025-07-12 14:52:25.483483 | orchestrator | 14:52:25.483 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=5466b265e16a4210d4f8aa19e7d3725a45f2b62c] 2025-07-12 14:52:26.210205 | orchestrator | 14:52:26.209 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=00238a29-4dee-45c7-8182-36af16613050] 2025-07-12 14:52:34.409252 | orchestrator | 14:52:34.408 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-07-12 14:52:34.413415 | orchestrator | 14:52:34.413 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-07-12 14:52:34.417650 | orchestrator | 14:52:34.417 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-07-12 14:52:34.417757 | orchestrator | 14:52:34.417 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-07-12 14:52:34.418672 | orchestrator | 14:52:34.418 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[10s elapsed] 2025-07-12 14:52:34.426135 | orchestrator | 14:52:34.425 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-07-12 14:52:44.409577 | orchestrator | 14:52:44.409 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-07-12 14:52:44.413820 | orchestrator | 14:52:44.413 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-07-12 14:52:44.418234 | orchestrator | 14:52:44.417 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-07-12 14:52:44.418432 | orchestrator | 14:52:44.418 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-07-12 14:52:44.419585 | orchestrator | 14:52:44.419 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-07-12 14:52:44.426757 | orchestrator | 14:52:44.426 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-07-12 14:52:54.410431 | orchestrator | 14:52:54.409 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-07-12 14:52:54.414315 | orchestrator | 14:52:54.414 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-07-12 14:52:54.419491 | orchestrator | 14:52:54.419 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-07-12 14:52:54.419605 | orchestrator | 14:52:54.419 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-07-12 14:52:54.419741 | orchestrator | 14:52:54.419 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-07-12 14:52:54.427210 | orchestrator | 14:52:54.426 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[30s elapsed] 2025-07-12 14:52:54.862873 | orchestrator | 14:52:54.862 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=d9c8abef-da85-47f4-8433-3e049632c66a] 2025-07-12 14:52:54.999726 | orchestrator | 14:52:54.999 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=27131fa1-1585-487d-9eb9-a119c397c97d] 2025-07-12 14:52:55.108947 | orchestrator | 14:52:55.108 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=e9bed025-0bca-42a0-9a8a-926bdf424a14] 2025-07-12 14:52:55.119255 | orchestrator | 14:52:55.118 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=a520c93e-54a9-4262-b6fd-f676cc277299] 2025-07-12 14:53:04.410545 | orchestrator | 14:53:04.410 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2025-07-12 14:53:04.420888 | orchestrator | 14:53:04.420 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2025-07-12 14:53:04.958296 | orchestrator | 14:53:04.957 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=c903c92f-cfcc-4ab6-bf8c-d7a11df8422a] 2025-07-12 14:53:05.222473 | orchestrator | 14:53:05.222 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=94777a3c-38b7-40d2-b0b5-dd71d03296e0] 2025-07-12 14:53:05.251189 | orchestrator | 14:53:05.250 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-07-12 14:53:05.253355 | orchestrator | 14:53:05.253 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-07-12 14:53:05.253586 | orchestrator | 14:53:05.253 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 
2025-07-12 14:53:05.259785 | orchestrator | 14:53:05.259 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-07-12 14:53:05.260413 | orchestrator | 14:53:05.260 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1683546389880278422] 2025-07-12 14:53:05.261457 | orchestrator | 14:53:05.261 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-07-12 14:53:05.261916 | orchestrator | 14:53:05.261 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-07-12 14:53:05.264879 | orchestrator | 14:53:05.264 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-07-12 14:53:05.265037 | orchestrator | 14:53:05.264 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-07-12 14:53:05.282247 | orchestrator | 14:53:05.281 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-07-12 14:53:05.289672 | orchestrator | 14:53:05.289 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-07-12 14:53:05.297299 | orchestrator | 14:53:05.297 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
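Nine `node_volume_attachment` resources are created here; the `instance_id/volume_id` pairs reported on completion show the nine data volumes spread across three of the node servers, three volumes each. A hypothetical sketch of such an attachment fan-out (the exact index arithmetic in the real configuration is an assumption):

```hcl
# Hypothetical sketch: attach nine data volumes across a subset of the nodes.
# The completion IDs in the log show three volumes per instance on three
# instances; the mapping expression below is one plausible way to express that.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```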
2025-07-12 14:53:08.664048 | orchestrator | 14:53:08.663 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=94777a3c-38b7-40d2-b0b5-dd71d03296e0/0aec1d56-840e-4d62-87fc-8ad42993ed21] 2025-07-12 14:53:08.665122 | orchestrator | 14:53:08.664 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=27131fa1-1585-487d-9eb9-a119c397c97d/80301f58-6d09-4d29-bcb1-b411833d1e96] 2025-07-12 14:53:08.696733 | orchestrator | 14:53:08.696 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=27131fa1-1585-487d-9eb9-a119c397c97d/df26c144-7e2c-487c-9e8f-effdfe3555dd] 2025-07-12 14:53:08.708472 | orchestrator | 14:53:08.708 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=e9bed025-0bca-42a0-9a8a-926bdf424a14/2d047699-b504-4740-af1d-648b929835be] 2025-07-12 14:53:08.726397 | orchestrator | 14:53:08.725 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=94777a3c-38b7-40d2-b0b5-dd71d03296e0/4e5b43f9-5557-4a03-9895-8e671249b5b2] 2025-07-12 14:53:08.855831 | orchestrator | 14:53:08.855 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=e9bed025-0bca-42a0-9a8a-926bdf424a14/e2bb8cb1-296e-41d9-9659-79f1ba9bca2a] 2025-07-12 14:53:14.797194 | orchestrator | 14:53:14.796 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=27131fa1-1585-487d-9eb9-a119c397c97d/9415964e-ba41-448d-be5c-d5fc92ddea3f] 2025-07-12 14:53:14.805680 | orchestrator | 14:53:14.805 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=94777a3c-38b7-40d2-b0b5-dd71d03296e0/c6699afa-886d-4139-8698-8a8fafe98984] 2025-07-12 14:53:14.828682 | orchestrator | 
14:53:14.828 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=e9bed025-0bca-42a0-9a8a-926bdf424a14/6698acfe-c205-405d-be66-12c19a56960d] 2025-07-12 14:53:15.298112 | orchestrator | 14:53:15.297 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-07-12 14:53:25.299057 | orchestrator | 14:53:25.298 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-07-12 14:53:25.676330 | orchestrator | 14:53:25.675 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=f865cb20-9e6a-4ede-bf74-6e2f068dacda] 2025-07-12 14:53:26.596723 | orchestrator | 14:53:26.596 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-07-12 14:53:26.596838 | orchestrator | 14:53:26.596 STDOUT terraform: Outputs: 2025-07-12 14:53:26.596857 | orchestrator | 14:53:26.596 STDOUT terraform: manager_address = 2025-07-12 14:53:26.596873 | orchestrator | 14:53:26.596 STDOUT terraform: private_key = 2025-07-12 14:53:26.828699 | orchestrator | ok: Runtime: 0:01:23.426106 2025-07-12 14:53:26.873868 | 2025-07-12 14:53:26.874105 | TASK [Fetch manager address] 2025-07-12 14:53:27.349635 | orchestrator | ok 2025-07-12 14:53:27.359809 | 2025-07-12 14:53:27.359953 | TASK [Set manager_host address] 2025-07-12 14:53:27.450689 | orchestrator | ok 2025-07-12 14:53:27.459746 | 2025-07-12 14:53:27.459895 | LOOP [Update ansible collections] 2025-07-12 14:53:29.583598 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 14:53:29.584100 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-12 14:53:29.584179 | orchestrator | Starting galaxy collection install process 2025-07-12 14:53:29.584216 | orchestrator | Process install dependency map 2025-07-12 14:53:29.584248 | orchestrator | Starting 
collection install process 2025-07-12 14:53:29.584277 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-07-12 14:53:29.584313 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-07-12 14:53:29.584349 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-07-12 14:53:29.584418 | orchestrator | ok: Item: commons Runtime: 0:00:01.805817 2025-07-12 14:53:30.451363 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 14:53:30.451575 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-12 14:53:30.451659 | orchestrator | Starting galaxy collection install process 2025-07-12 14:53:30.451729 | orchestrator | Process install dependency map 2025-07-12 14:53:30.451774 | orchestrator | Starting collection install process 2025-07-12 14:53:30.451811 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-07-12 14:53:30.451846 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-07-12 14:53:30.451899 | orchestrator | osism.services:999.0.0 was installed successfully 2025-07-12 14:53:30.451959 | orchestrator | ok: Item: services Runtime: 0:00:00.599315 2025-07-12 14:53:30.474106 | 2025-07-12 14:53:30.474297 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-12 14:53:41.087021 | orchestrator | ok 2025-07-12 14:53:41.097875 | 2025-07-12 14:53:41.098046 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-12 14:54:41.145872 | orchestrator | ok 2025-07-12 14:54:41.163504 | 2025-07-12 14:54:41.163734 | TASK [Fetch manager ssh hostkey] 2025-07-12 
14:54:42.747188 | orchestrator | Output suppressed because no_log was given 2025-07-12 14:54:42.762650 | 2025-07-12 14:54:42.762911 | TASK [Get ssh keypair from terraform environment] 2025-07-12 14:54:43.304262 | orchestrator | ok: Runtime: 0:00:00.010179 2025-07-12 14:54:43.321976 | 2025-07-12 14:54:43.322172 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-12 14:54:43.360870 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-07-12 14:54:43.372383 | 2025-07-12 14:54:43.372623 | TASK [Run manager part 0] 2025-07-12 14:54:44.751942 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 14:54:44.888480 | orchestrator | 2025-07-12 14:54:44.888566 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-07-12 14:54:44.888586 | orchestrator | 2025-07-12 14:54:44.888617 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-07-12 14:54:46.613877 | orchestrator | ok: [testbed-manager] 2025-07-12 14:54:46.613965 | orchestrator | 2025-07-12 14:54:46.614059 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-12 14:54:46.614113 | orchestrator | 2025-07-12 14:54:46.614163 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 14:54:48.454736 | orchestrator | ok: [testbed-manager] 2025-07-12 14:54:48.454782 | orchestrator | 2025-07-12 14:54:48.454790 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-12 14:54:49.136428 | orchestrator | ok: [testbed-manager] 2025-07-12 14:54:49.136547 | orchestrator | 2025-07-12 14:54:49.136560 | orchestrator | TASK [Set repo_path fact] 
****************************************************** 2025-07-12 14:54:49.179868 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:54:49.179908 | orchestrator | 2025-07-12 14:54:49.179917 | orchestrator | TASK [Update package cache] **************************************************** 2025-07-12 14:54:49.214501 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:54:49.214544 | orchestrator | 2025-07-12 14:54:49.214551 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-12 14:54:49.242003 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:54:49.242076 | orchestrator | 2025-07-12 14:54:49.242083 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-12 14:54:49.269342 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:54:49.269384 | orchestrator | 2025-07-12 14:54:49.269392 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-12 14:54:49.300949 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:54:49.300999 | orchestrator | 2025-07-12 14:54:49.301010 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-07-12 14:54:49.336494 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:54:49.336552 | orchestrator | 2025-07-12 14:54:49.336563 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-07-12 14:54:49.371005 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:54:49.371051 | orchestrator | 2025-07-12 14:54:49.371058 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-07-12 14:54:50.195363 | orchestrator | changed: [testbed-manager] 2025-07-12 14:54:50.195444 | orchestrator | 2025-07-12 14:54:50.195460 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-07-12 
14:57:57.202400 | orchestrator | changed: [testbed-manager] 2025-07-12 14:57:57.202511 | orchestrator | 2025-07-12 14:57:57.202531 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-12 14:59:36.086296 | orchestrator | changed: [testbed-manager] 2025-07-12 14:59:36.086354 | orchestrator | 2025-07-12 14:59:36.086368 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-12 14:59:55.395484 | orchestrator | changed: [testbed-manager] 2025-07-12 14:59:55.396253 | orchestrator | 2025-07-12 14:59:55.396282 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-12 15:00:03.954411 | orchestrator | changed: [testbed-manager] 2025-07-12 15:00:03.954480 | orchestrator | 2025-07-12 15:00:03.954495 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-12 15:00:04.008628 | orchestrator | ok: [testbed-manager] 2025-07-12 15:00:04.008680 | orchestrator | 2025-07-12 15:00:04.008691 | orchestrator | TASK [Get current user] ******************************************************** 2025-07-12 15:00:04.829832 | orchestrator | ok: [testbed-manager] 2025-07-12 15:00:04.830485 | orchestrator | 2025-07-12 15:00:04.830510 | orchestrator | TASK [Create venv directory] *************************************************** 2025-07-12 15:00:05.605729 | orchestrator | changed: [testbed-manager] 2025-07-12 15:00:05.605799 | orchestrator | 2025-07-12 15:00:05.605817 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-07-12 15:00:11.848190 | orchestrator | changed: [testbed-manager] 2025-07-12 15:00:11.848256 | orchestrator | 2025-07-12 15:00:11.848295 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-07-12 15:00:17.720106 | orchestrator | changed: [testbed-manager] 2025-07-12 15:00:17.720174 | 
orchestrator | 2025-07-12 15:00:17.720192 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-07-12 15:00:20.393114 | orchestrator | changed: [testbed-manager] 2025-07-12 15:00:20.393198 | orchestrator | 2025-07-12 15:00:20.393214 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-07-12 15:00:22.117925 | orchestrator | changed: [testbed-manager] 2025-07-12 15:00:22.117994 | orchestrator | 2025-07-12 15:00:22.118010 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-07-12 15:00:23.199923 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-12 15:00:23.199981 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-12 15:00:23.199995 | orchestrator | 2025-07-12 15:00:23.200007 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-07-12 15:00:23.234524 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-12 15:00:23.234565 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-12 15:00:23.234571 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-12 15:00:23.234576 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-07-12 15:00:33.423514 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-12 15:00:33.423598 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-12 15:00:33.423637 | orchestrator | 2025-07-12 15:00:33.423650 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-07-12 15:00:33.998556 | orchestrator | changed: [testbed-manager] 2025-07-12 15:00:33.998667 | orchestrator | 2025-07-12 15:00:33.998685 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-07-12 15:04:04.006790 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-07-12 15:04:04.006937 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-07-12 15:04:04.006958 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-07-12 15:04:04.006971 | orchestrator | 2025-07-12 15:04:04.006984 | orchestrator | TASK [Install local collections] *********************************************** 2025-07-12 15:04:06.324998 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-07-12 15:04:06.325666 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-07-12 15:04:06.325686 | orchestrator | 2025-07-12 15:04:06.325699 | orchestrator | PLAY [Create operator user] **************************************************** 2025-07-12 15:04:06.325712 | orchestrator | 2025-07-12 15:04:06.325723 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 15:04:07.746233 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:07.746331 | orchestrator | 2025-07-12 15:04:07.746351 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-12 15:04:07.794088 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:07.794195 | 
orchestrator | 2025-07-12 15:04:07.794217 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-12 15:04:07.871635 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:07.871712 | orchestrator | 2025-07-12 15:04:07.871727 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-12 15:04:08.617127 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:08.617204 | orchestrator | 2025-07-12 15:04:08.617219 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-12 15:04:09.338880 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:09.339042 | orchestrator | 2025-07-12 15:04:09.339056 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-12 15:04:10.673159 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-07-12 15:04:10.673209 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-07-12 15:04:10.673215 | orchestrator | 2025-07-12 15:04:10.673231 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-07-12 15:04:12.069042 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:12.069158 | orchestrator | 2025-07-12 15:04:12.069176 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-12 15:04:13.788458 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-07-12 15:04:13.788553 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-07-12 15:04:13.788572 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-07-12 15:04:13.788583 | orchestrator | 2025-07-12 15:04:13.788596 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-07-12 15:04:13.838245 | orchestrator | skipping: 
[testbed-manager] 2025-07-12 15:04:13.838298 | orchestrator | 2025-07-12 15:04:13.838304 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-12 15:04:14.420664 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:14.420707 | orchestrator | 2025-07-12 15:04:14.420718 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-12 15:04:14.493017 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:04:14.493059 | orchestrator | 2025-07-12 15:04:14.493068 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-07-12 15:04:15.365215 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 15:04:15.365295 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:15.365309 | orchestrator | 2025-07-12 15:04:15.365321 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-12 15:04:15.404711 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:04:15.404783 | orchestrator | 2025-07-12 15:04:15.404798 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-12 15:04:15.439469 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:04:15.439556 | orchestrator | 2025-07-12 15:04:15.439573 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-12 15:04:15.469901 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:04:15.469971 | orchestrator | 2025-07-12 15:04:15.469985 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-12 15:04:15.516638 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:04:15.516693 | orchestrator | 2025-07-12 15:04:15.516709 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-12 15:04:16.208255 | orchestrator 
| ok: [testbed-manager] 2025-07-12 15:04:16.208288 | orchestrator | 2025-07-12 15:04:16.208294 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-12 15:04:16.208299 | orchestrator | 2025-07-12 15:04:16.208303 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 15:04:17.610575 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:17.610652 | orchestrator | 2025-07-12 15:04:17.610664 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-07-12 15:04:18.572695 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:18.572784 | orchestrator | 2025-07-12 15:04:18.572801 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:04:18.572815 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-07-12 15:04:18.572826 | orchestrator | 2025-07-12 15:04:18.750110 | orchestrator | ok: Runtime: 0:09:35.020540 2025-07-12 15:04:18.758458 | 2025-07-12 15:04:18.758536 | TASK [Point out that logging in on the manager is now possible] 2025-07-12 15:04:18.787896 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-07-12 15:04:18.794208 | 2025-07-12 15:04:18.794288 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-12 15:04:18.832890 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output here. It takes a few minutes for this task to complete. 
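The job gates on SSH readiness twice with "Wait up to 300 seconds for port 22 to become open and contain 'OpenSSH'" (before part 0 and again after the reboot). A minimal shell sketch of that check, assuming an nc-based probe — the job itself uses Ansible's wait_for with search_regex, and the helper name here is illustrative:

```shell
# Poll a TCP port until the server's greeting contains a pattern,
# or a timeout expires. The SSH server sends its version banner
# (e.g. "SSH-2.0-OpenSSH_9.6") as soon as the connection opens.
wait_for_banner() {
    host=$1; port=$2; pattern=$3; timeout=${4:-300}
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        # Read the first line the server sends; nc exits after stdin closes.
        banner=$(nc -w 2 "$host" "$port" </dev/null 2>/dev/null | head -n 1)
        case $banner in
            *"$pattern"*) echo "$banner"; return 0 ;;
        esac
        sleep 5
        elapsed=$((elapsed + 5))
    done
    echo "timed out waiting for $pattern on $host:$port" >&2
    return 1
}
```

A second fixed "Wait a little longer" pause follows in the log because an open SSH port does not yet mean cloud-init and all services have settled.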
2025-07-12 15:04:18.841824 | 2025-07-12 15:04:18.841974 | TASK [Run manager part 1 + 2] 2025-07-12 15:04:19.728322 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 15:04:19.783305 | orchestrator | 2025-07-12 15:04:19.783395 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-07-12 15:04:19.783418 | orchestrator | 2025-07-12 15:04:19.783448 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 15:04:22.667794 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:22.668515 | orchestrator | 2025-07-12 15:04:22.668587 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-12 15:04:22.705729 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:04:22.705803 | orchestrator | 2025-07-12 15:04:22.705822 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-12 15:04:22.744126 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:22.744208 | orchestrator | 2025-07-12 15:04:22.744226 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-12 15:04:22.791125 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:22.791201 | orchestrator | 2025-07-12 15:04:22.791217 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-12 15:04:22.863108 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:22.863188 | orchestrator | 2025-07-12 15:04:22.863205 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-12 15:04:22.921812 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:22.921906 | orchestrator | 2025-07-12 15:04:22.921923 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-12 15:04:22.963676 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-07-12 15:04:22.963719 | orchestrator | 2025-07-12 15:04:22.963726 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-12 15:04:23.673083 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:23.673141 | orchestrator | 2025-07-12 15:04:23.673152 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-12 15:04:23.727202 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:04:23.727271 | orchestrator | 2025-07-12 15:04:23.727283 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-12 15:04:25.058948 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:25.059009 | orchestrator | 2025-07-12 15:04:25.059019 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-12 15:04:25.643494 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:25.643550 | orchestrator | 2025-07-12 15:04:25.643558 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-12 15:04:26.776234 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:26.776292 | orchestrator | 2025-07-12 15:04:26.776304 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-12 15:04:38.274177 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:38.274263 | orchestrator | 2025-07-12 15:04:38.274271 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-12 15:04:38.900866 | orchestrator | ok: [testbed-manager] 2025-07-12 15:04:38.900945 | orchestrator | 2025-07-12 15:04:38.900961 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-07-12 15:04:38.975218 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:04:38.975265 | orchestrator | 2025-07-12 15:04:38.975272 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-07-12 15:04:39.813563 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:39.813751 | orchestrator | 2025-07-12 15:04:39.813768 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-07-12 15:04:40.720604 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:40.720644 | orchestrator | 2025-07-12 15:04:40.720651 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-07-12 15:04:41.295582 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:41.296512 | orchestrator | 2025-07-12 15:04:41.296534 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-07-12 15:04:41.337465 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-12 15:04:41.337590 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-12 15:04:41.337607 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-12 15:04:41.337620 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-07-12 15:04:43.745538 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:43.745582 | orchestrator | 2025-07-12 15:04:43.745590 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-07-12 15:04:52.442734 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-07-12 15:04:52.442799 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-07-12 15:04:52.442815 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-07-12 15:04:52.442827 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-07-12 15:04:52.442844 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-07-12 15:04:52.442883 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-07-12 15:04:52.442899 | orchestrator | 2025-07-12 15:04:52.442911 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-07-12 15:04:53.465450 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:53.465494 | orchestrator | 2025-07-12 15:04:53.465503 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-07-12 15:04:53.505279 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:04:53.505367 | orchestrator | 2025-07-12 15:04:53.505391 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-07-12 15:04:56.574692 | orchestrator | changed: [testbed-manager] 2025-07-12 15:04:56.574784 | orchestrator | 2025-07-12 15:04:56.574843 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-07-12 15:04:56.619232 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:04:56.619316 | orchestrator | 2025-07-12 15:04:56.619341 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-07-12 15:06:31.762520 | orchestrator | changed: [testbed-manager] 2025-07-12 
15:06:31.762607 | orchestrator | 2025-07-12 15:06:31.762624 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-12 15:06:32.861849 | orchestrator | ok: [testbed-manager] 2025-07-12 15:06:32.861935 | orchestrator | 2025-07-12 15:06:32.861989 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:06:32.862004 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-07-12 15:06:32.862043 | orchestrator | 2025-07-12 15:06:32.997802 | orchestrator | ok: Runtime: 0:02:13.780406 2025-07-12 15:06:33.008312 | 2025-07-12 15:06:33.008434 | TASK [Reboot manager] 2025-07-12 15:06:34.543643 | orchestrator | ok: Runtime: 0:00:00.969402 2025-07-12 15:06:34.560644 | 2025-07-12 15:06:34.560813 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-12 15:06:48.293231 | orchestrator | ok 2025-07-12 15:06:48.302237 | 2025-07-12 15:06:48.302363 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-12 15:07:48.340029 | orchestrator | ok 2025-07-12 15:07:48.348663 | 2025-07-12 15:07:48.348809 | TASK [Deploy manager + bootstrap nodes] 2025-07-12 15:07:50.688333 | orchestrator | 2025-07-12 15:07:50.688529 | orchestrator | # DEPLOY MANAGER 2025-07-12 15:07:50.688552 | orchestrator | 2025-07-12 15:07:50.688567 | orchestrator | + set -e 2025-07-12 15:07:50.688580 | orchestrator | + echo 2025-07-12 15:07:50.688594 | orchestrator | + echo '# DEPLOY MANAGER' 2025-07-12 15:07:50.688611 | orchestrator | + echo 2025-07-12 15:07:50.688661 | orchestrator | + cat /opt/manager-vars.sh 2025-07-12 15:07:50.691636 | orchestrator | export NUMBER_OF_NODES=6 2025-07-12 15:07:50.691663 | orchestrator | 2025-07-12 15:07:50.691676 | orchestrator | export CEPH_VERSION=reef 2025-07-12 15:07:50.691688 | orchestrator | export CONFIGURATION_VERSION=main 2025-07-12 15:07:50.691700 | orchestrator 
| export MANAGER_VERSION=9.2.0 2025-07-12 15:07:50.691722 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-07-12 15:07:50.691733 | orchestrator | 2025-07-12 15:07:50.691751 | orchestrator | export ARA=false 2025-07-12 15:07:50.691762 | orchestrator | export DEPLOY_MODE=manager 2025-07-12 15:07:50.691779 | orchestrator | export TEMPEST=false 2025-07-12 15:07:50.691791 | orchestrator | export IS_ZUUL=true 2025-07-12 15:07:50.691802 | orchestrator | 2025-07-12 15:07:50.691819 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204 2025-07-12 15:07:50.691831 | orchestrator | export EXTERNAL_API=false 2025-07-12 15:07:50.691841 | orchestrator | 2025-07-12 15:07:50.691852 | orchestrator | export IMAGE_USER=ubuntu 2025-07-12 15:07:50.691866 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-07-12 15:07:50.691876 | orchestrator | 2025-07-12 15:07:50.691887 | orchestrator | export CEPH_STACK=ceph-ansible 2025-07-12 15:07:50.692177 | orchestrator | 2025-07-12 15:07:50.692194 | orchestrator | + echo 2025-07-12 15:07:50.692207 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 15:07:50.693089 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 15:07:50.693115 | orchestrator | ++ INTERACTIVE=false 2025-07-12 15:07:50.693126 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 15:07:50.693138 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 15:07:50.693378 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 15:07:50.693400 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 15:07:50.693411 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 15:07:50.693422 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 15:07:50.693433 | orchestrator | ++ CEPH_VERSION=reef 2025-07-12 15:07:50.693444 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 15:07:50.693455 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 15:07:50.693465 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 15:07:50.693511 | 
orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 15:07:50.693527 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 15:07:50.693548 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 15:07:50.693560 | orchestrator | ++ export ARA=false
2025-07-12 15:07:50.693571 | orchestrator | ++ ARA=false
2025-07-12 15:07:50.693582 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 15:07:50.693670 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 15:07:50.693685 | orchestrator | ++ export TEMPEST=false
2025-07-12 15:07:50.693696 | orchestrator | ++ TEMPEST=false
2025-07-12 15:07:50.693711 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 15:07:50.693722 | orchestrator | ++ IS_ZUUL=true
2025-07-12 15:07:50.693805 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204
2025-07-12 15:07:50.693820 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204
2025-07-12 15:07:50.693832 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 15:07:50.693842 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 15:07:50.693882 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 15:07:50.693893 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 15:07:50.693904 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 15:07:50.693946 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 15:07:50.693959 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 15:07:50.693970 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 15:07:50.693985 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-12 15:07:50.750947 | orchestrator | + docker version
2025-07-12 15:07:50.999713 | orchestrator | Client: Docker Engine - Community
2025-07-12 15:07:50.999824 | orchestrator |  Version: 27.5.1
2025-07-12 15:07:50.999841 | orchestrator |  API version: 1.47
2025-07-12 15:07:50.999853 | orchestrator |  Go version: go1.22.11
2025-07-12 15:07:50.999864 | orchestrator |  Git commit: 9f9e405
2025-07-12 15:07:50.999875 | orchestrator |  Built: Wed Jan 22 13:41:48 2025
2025-07-12 15:07:50.999888 | orchestrator |  OS/Arch: linux/amd64
2025-07-12 15:07:50.999899 | orchestrator |  Context: default
2025-07-12 15:07:50.999910 | orchestrator |
2025-07-12 15:07:50.999922 | orchestrator | Server: Docker Engine - Community
2025-07-12 15:07:50.999934 | orchestrator |  Engine:
2025-07-12 15:07:50.999945 | orchestrator |   Version: 27.5.1
2025-07-12 15:07:50.999957 | orchestrator |   API version: 1.47 (minimum version 1.24)
2025-07-12 15:07:51.000064 | orchestrator |   Go version: go1.22.11
2025-07-12 15:07:51.000081 | orchestrator |   Git commit: 4c9b3b0
2025-07-12 15:07:51.000092 | orchestrator |   Built: Wed Jan 22 13:41:48 2025
2025-07-12 15:07:51.000104 | orchestrator |   OS/Arch: linux/amd64
2025-07-12 15:07:51.000115 | orchestrator |   Experimental: false
2025-07-12 15:07:51.000126 | orchestrator |  containerd:
2025-07-12 15:07:51.000138 | orchestrator |   Version: 1.7.27
2025-07-12 15:07:51.000149 | orchestrator |   GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-12 15:07:51.000160 | orchestrator |  runc:
2025-07-12 15:07:51.000171 | orchestrator |   Version: 1.2.5
2025-07-12 15:07:51.000183 | orchestrator |   GitCommit: v1.2.5-0-g59923ef
2025-07-12 15:07:51.000194 | orchestrator |  docker-init:
2025-07-12 15:07:51.000205 | orchestrator |   Version: 0.19.0
2025-07-12 15:07:51.000217 | orchestrator |   GitCommit: de40ad0
2025-07-12 15:07:51.002927 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-12 15:07:51.012709 | orchestrator | + set -e
2025-07-12 15:07:51.012733 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 15:07:51.012744 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 15:07:51.012755 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 15:07:51.012766 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 15:07:51.012777 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 15:07:51.012788 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 15:07:51.012799 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 15:07:51.012809 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 15:07:51.012820 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 15:07:51.012836 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 15:07:51.012847 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 15:07:51.012858 | orchestrator | ++ export ARA=false
2025-07-12 15:07:51.012869 | orchestrator | ++ ARA=false
2025-07-12 15:07:51.012879 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 15:07:51.012890 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 15:07:51.012900 | orchestrator | ++ export TEMPEST=false
2025-07-12 15:07:51.012911 | orchestrator | ++ TEMPEST=false
2025-07-12 15:07:51.012922 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 15:07:51.012932 | orchestrator | ++ IS_ZUUL=true
2025-07-12 15:07:51.012943 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204
2025-07-12 15:07:51.012954 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204
2025-07-12 15:07:51.012965 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 15:07:51.012975 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 15:07:51.012986 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 15:07:51.013029 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 15:07:51.013041 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 15:07:51.013052 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 15:07:51.013062 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 15:07:51.013073 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 15:07:51.013084 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 15:07:51.013099 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 15:07:51.013110 | orchestrator | ++ INTERACTIVE=false
2025-07-12 15:07:51.013121 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 15:07:51.013137 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 15:07:51.013237 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-07-12 15:07:51.013253 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0
2025-07-12 15:07:51.020249 | orchestrator | + set -e
2025-07-12 15:07:51.020275 | orchestrator | + VERSION=9.2.0
2025-07-12 15:07:51.020287 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml
2025-07-12 15:07:51.027870 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-07-12 15:07:51.027897 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-07-12 15:07:51.032050 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-07-12 15:07:51.035411 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-07-12 15:07:51.043035 | orchestrator | /opt/configuration ~
2025-07-12 15:07:51.043087 | orchestrator | + set -e
2025-07-12 15:07:51.043097 | orchestrator | + pushd /opt/configuration
2025-07-12 15:07:51.043108 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 15:07:51.044531 | orchestrator | + source /opt/venv/bin/activate
2025-07-12 15:07:51.045479 | orchestrator | ++ deactivate nondestructive
2025-07-12 15:07:51.045496 | orchestrator | ++ '[' -n '' ']'
2025-07-12 15:07:51.045512 | orchestrator | ++ '[' -n '' ']'
2025-07-12 15:07:51.045548 | orchestrator | ++ hash -r
2025-07-12 15:07:51.045563 | orchestrator | ++ '[' -n '' ']'
2025-07-12 15:07:51.045574 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-12 15:07:51.045585 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-12 15:07:51.045596 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-12 15:07:51.045807 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-12 15:07:51.045823 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-12 15:07:51.045834 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-12 15:07:51.045844 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-12 15:07:51.045856 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 15:07:51.045871 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 15:07:51.045882 | orchestrator | ++ export PATH
2025-07-12 15:07:51.046047 | orchestrator | ++ '[' -n '' ']'
2025-07-12 15:07:51.046065 | orchestrator | ++ '[' -z '' ']'
2025-07-12 15:07:51.046076 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-12 15:07:51.046091 | orchestrator | ++ PS1='(venv) '
2025-07-12 15:07:51.046102 | orchestrator | ++ export PS1
2025-07-12 15:07:51.046113 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-12 15:07:51.046123 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-12 15:07:51.046137 | orchestrator | ++ hash -r
2025-07-12 15:07:51.046149 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-07-12 15:07:52.002769 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-07-12 15:07:52.003551 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.4)
2025-07-12 15:07:52.004877 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-07-12 15:07:52.006168 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-07-12 15:07:52.007329 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-07-12 15:07:52.017104 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-07-12 15:07:52.018504 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-07-12 15:07:52.019592 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-07-12 15:07:52.020915 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-07-12 15:07:52.049375 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-07-12 15:07:52.050906 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-07-12 15:07:52.052662 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0)
2025-07-12 15:07:52.053918 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.7.9)
2025-07-12 15:07:52.057904 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-07-12 15:07:52.249440 | orchestrator | ++ which gilt
2025-07-12 15:07:52.252993 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-07-12 15:07:52.253067 | orchestrator | + /opt/venv/bin/gilt overlay
2025-07-12 15:07:52.478166 | orchestrator | osism.cfg-generics:
2025-07-12 15:07:52.633585 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-07-12 15:07:52.633688 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-07-12 15:07:52.634199 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-07-12 15:07:52.634221 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-07-12 15:07:53.166171 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-07-12 15:07:53.179032 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-07-12 15:07:53.494154 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-07-12 15:07:53.537749 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 15:07:53.537834 | orchestrator | + deactivate
2025-07-12 15:07:53.537849 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-12 15:07:53.537862 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 15:07:53.537874 | orchestrator | + export PATH
2025-07-12 15:07:53.537885 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-12 15:07:53.537897 | orchestrator | + '[' -n '' ']'
2025-07-12 15:07:53.537911 | orchestrator | + hash -r
2025-07-12 15:07:53.537932 | orchestrator | ~
2025-07-12 15:07:53.537944 | orchestrator | + '[' -n '' ']'
2025-07-12 15:07:53.537955 | orchestrator | + unset VIRTUAL_ENV
2025-07-12 15:07:53.537965 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-12 15:07:53.537976 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-12 15:07:53.537987 | orchestrator | + unset -f deactivate
2025-07-12 15:07:53.538101 | orchestrator | + popd
2025-07-12 15:07:53.540126 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]]
2025-07-12 15:07:53.540163 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-07-12 15:07:53.541127 | orchestrator | ++ semver 9.2.0 7.0.0
2025-07-12 15:07:53.596577 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-12 15:07:53.596660 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-07-12 15:07:53.596674 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-07-12 15:07:53.685594 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 15:07:53.685695 | orchestrator | + source /opt/venv/bin/activate
2025-07-12 15:07:53.685710 | orchestrator | ++ deactivate nondestructive
2025-07-12 15:07:53.685721 | orchestrator | ++ '[' -n '' ']'
2025-07-12 15:07:53.685733 | orchestrator | ++ '[' -n '' ']'
2025-07-12 15:07:53.685743 | orchestrator | ++ hash -r
2025-07-12 15:07:53.685754 | orchestrator | ++ '[' -n '' ']'
2025-07-12 15:07:53.685765 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-12 15:07:53.685775 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-12 15:07:53.685786 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-12 15:07:53.685798 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-12 15:07:53.685809 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-12 15:07:53.685819 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-12 15:07:53.685830 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-12 15:07:53.685842 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 15:07:53.685854 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 15:07:53.685900 | orchestrator | ++ export PATH
2025-07-12 15:07:53.685912 | orchestrator | ++ '[' -n '' ']'
2025-07-12 15:07:53.685923 | orchestrator | ++ '[' -z '' ']'
2025-07-12 15:07:53.685933 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-12 15:07:53.685944 | orchestrator | ++ PS1='(venv) '
2025-07-12 15:07:53.685954 | orchestrator | ++ export PS1
2025-07-12 15:07:53.685965 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-12 15:07:53.685976 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-12 15:07:53.685986 | orchestrator | ++ hash -r
2025-07-12 15:07:53.686084 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-07-12 15:07:54.787929 | orchestrator |
2025-07-12 15:07:54.788086 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-07-12 15:07:54.788106 | orchestrator |
2025-07-12 15:07:54.788118 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-12 15:07:55.346178 | orchestrator | ok: [testbed-manager]
2025-07-12 15:07:55.346294 | orchestrator |
2025-07-12 15:07:55.346312 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-12 15:07:56.301126 | orchestrator | changed: [testbed-manager]
2025-07-12 15:07:56.301244 | orchestrator |
2025-07-12 15:07:56.301262 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-07-12 15:07:56.301276 | orchestrator |
2025-07-12 15:07:56.301288 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 15:07:58.595798 | orchestrator | ok: [testbed-manager]
2025-07-12 15:07:58.595914 | orchestrator |
2025-07-12 15:07:58.595931 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-07-12 15:07:58.650617 | orchestrator | ok: [testbed-manager]
2025-07-12 15:07:58.650679 | orchestrator |
2025-07-12 15:07:58.650695 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-07-12 15:07:59.115635 | orchestrator | changed: [testbed-manager]
2025-07-12 15:07:59.115726 | orchestrator |
2025-07-12 15:07:59.115743 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-07-12 15:07:59.157572 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:07:59.157649 | orchestrator |
2025-07-12 15:07:59.157663 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-12 15:07:59.504373 | orchestrator | changed: [testbed-manager]
2025-07-12 15:07:59.504477 | orchestrator |
2025-07-12 15:07:59.504492 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-07-12 15:07:59.559946 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:07:59.560055 | orchestrator |
2025-07-12 15:07:59.560071 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-07-12 15:07:59.891623 | orchestrator | ok: [testbed-manager]
2025-07-12 15:07:59.891718 | orchestrator |
2025-07-12 15:07:59.891733 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-07-12 15:08:00.004431 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:08:00.004543 | orchestrator |
2025-07-12 15:08:00.004559 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-07-12 15:08:00.004572 | orchestrator |
2025-07-12 15:08:00.004583 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 15:08:01.761767 | orchestrator | ok: [testbed-manager]
2025-07-12 15:08:01.761874 | orchestrator |
2025-07-12 15:08:01.761890 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-07-12 15:08:01.847464 | orchestrator | included: osism.services.traefik for testbed-manager
2025-07-12 15:08:01.847547 | orchestrator |
2025-07-12 15:08:01.847562 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-07-12 15:08:01.912237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-07-12 15:08:01.912314 | orchestrator |
2025-07-12 15:08:01.912329 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-07-12 15:08:02.981875 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-07-12 15:08:02.981983 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-07-12 15:08:02.982000 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-07-12 15:08:02.982103 | orchestrator |
2025-07-12 15:08:02.982115 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-07-12 15:08:04.746268 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-07-12 15:08:04.746370 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-07-12 15:08:04.746386 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-07-12 15:08:04.746399 | orchestrator |
2025-07-12 15:08:04.746411 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-07-12 15:08:05.384075 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 15:08:05.384203 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:05.384232 | orchestrator |
2025-07-12 15:08:05.384246 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-07-12 15:08:06.004913 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 15:08:06.005082 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:06.005101 | orchestrator |
2025-07-12 15:08:06.005114 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-07-12 15:08:06.063446 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:08:06.063535 | orchestrator |
2025-07-12 15:08:06.063549 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-07-12 15:08:06.429543 | orchestrator | ok: [testbed-manager]
2025-07-12 15:08:06.429644 | orchestrator |
2025-07-12 15:08:06.429659 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-07-12 15:08:06.491731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-07-12 15:08:06.491812 | orchestrator |
2025-07-12 15:08:06.491820 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-07-12 15:08:07.534444 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:07.534549 | orchestrator |
2025-07-12 15:08:07.534566 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-07-12 15:08:08.310627 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:08.310733 | orchestrator |
2025-07-12 15:08:08.310749 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-07-12 15:08:18.930214 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:18.930324 | orchestrator |
2025-07-12 15:08:18.930363 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-07-12 15:08:18.978373 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:08:18.978459 | orchestrator |
2025-07-12 15:08:18.978472 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-07-12 15:08:18.978483 | orchestrator |
2025-07-12 15:08:18.978494 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 15:08:20.788531 | orchestrator | ok: [testbed-manager]
2025-07-12 15:08:20.788639 | orchestrator |
2025-07-12 15:08:20.788655 | orchestrator | TASK [Apply manager role] ******************************************************
2025-07-12 15:08:20.900557 | orchestrator | included: osism.services.manager for testbed-manager
2025-07-12 15:08:20.900655 | orchestrator |
2025-07-12 15:08:20.900670 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-07-12 15:08:20.957048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 15:08:20.957146 | orchestrator |
2025-07-12 15:08:20.957160 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-07-12 15:08:23.354551 | orchestrator | ok: [testbed-manager]
2025-07-12 15:08:23.354664 | orchestrator |
2025-07-12 15:08:23.354681 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-07-12 15:08:23.406496 | orchestrator | ok: [testbed-manager]
2025-07-12 15:08:23.406593 | orchestrator |
2025-07-12 15:08:23.406607 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-07-12 15:08:23.529436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-07-12 15:08:23.529532 | orchestrator |
2025-07-12 15:08:23.529548 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-07-12 15:08:26.321202 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-07-12 15:08:26.321309 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-07-12 15:08:26.321324 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-07-12 15:08:26.321337 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-07-12 15:08:26.321348 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-07-12 15:08:26.321359 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-07-12 15:08:26.321370 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-07-12 15:08:26.321381 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-07-12 15:08:26.321392 | orchestrator |
2025-07-12 15:08:26.321406 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-07-12 15:08:26.943985 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:26.944174 | orchestrator |
2025-07-12 15:08:26.944201 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-07-12 15:08:27.552487 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:27.552589 | orchestrator |
2025-07-12 15:08:27.552604 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-07-12 15:08:27.612560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-07-12 15:08:27.612640 | orchestrator |
2025-07-12 15:08:27.612654 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-07-12 15:08:28.789015 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-07-12 15:08:28.789187 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-07-12 15:08:28.789205 | orchestrator |
2025-07-12 15:08:28.789219 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-07-12 15:08:29.391615 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:29.391721 | orchestrator |
2025-07-12 15:08:29.391737 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-07-12 15:08:29.453869 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:08:29.453955 | orchestrator |
2025-07-12 15:08:29.453969 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-07-12 15:08:29.514534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-07-12 15:08:29.514612 | orchestrator |
2025-07-12 15:08:29.514625 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-07-12 15:08:30.878279 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 15:08:30.878443 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 15:08:30.878459 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:30.878471 | orchestrator |
2025-07-12 15:08:30.878484 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-07-12 15:08:31.509239 | orchestrator | changed: [testbed-manager]
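The `set-manager-version.sh 9.2.0` step traced earlier in this log pins `manager_version` in the configuration and, for a pinned (non-`latest`) release, deletes the explicit `ceph_version`/`openstack_version` lines so that release's defaults apply. A minimal self-contained sketch of that sed logic, reconstructed from the trace; it runs against a temporary file instead of `/opt/configuration/environments/manager/configuration.yml` so the example has no side effects:

```shell
#!/usr/bin/env bash
# Sketch of the version-pinning commands traced for set-manager-version.sh.
# The real script edits /opt/configuration/environments/manager/configuration.yml;
# a temp file stands in here so the example is self-contained.
set -e

VERSION="9.2.0"
CONFIG="$(mktemp)"
printf 'manager_version: latest\nceph_version: reef\nopenstack_version: 2024.2\n' > "$CONFIG"

# Pin the manager version.
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$CONFIG"

# For a pinned manager version, drop the explicit Ceph/OpenStack pins so
# the defaults bundled with that manager release take effect.
if [ "$VERSION" != "latest" ]; then
    sed -i '/ceph_version:/d' "$CONFIG"
    sed -i '/openstack_version:/d' "$CONFIG"
fi

cat "$CONFIG"
```

Run with a `latest` version instead and the two delete commands are skipped, leaving the explicit pins in place.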
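Earlier in this log, `semver 9.2.0 7.0.0` (the `contrib/semver2.sh` symlink) yields `1` because 9.2.0 sorts after 7.0.0, and the `[[ 1 -ge 0 ]]` gate then emits `enable_osism_kubernetes: true`. The following is a simplified stand-in comparator to illustrate that gate, not the actual `semver2.sh` implementation:

```shell
# Hypothetical simplified comparator: prints 1, 0, or -1 when the first
# x.y.z version is greater than, equal to, or less than the second.
# The real deployment uses /opt/configuration/contrib/semver2.sh.
semver_cmp() {
    a1=$(echo "$1" | cut -d. -f1); a2=$(echo "$1" | cut -d. -f2); a3=$(echo "$1" | cut -d. -f3)
    b1=$(echo "$2" | cut -d. -f1); b2=$(echo "$2" | cut -d. -f2); b3=$(echo "$2" | cut -d. -f3)
    for pair in "$a1 $b1" "$a2 $b2" "$a3 $b3"; do
        set -- $pair
        if [ "$1" -gt "$2" ]; then echo 1; return; fi
        if [ "$1" -lt "$2" ]; then echo -1; return; fi
    done
    echo 0
}

# As in the trace: 9.2.0 >= 7.0.0, so the Kubernetes flag gets appended
# to the manager configuration.
if [ "$(semver_cmp 9.2.0 7.0.0)" -ge 0 ]; then
    echo 'enable_osism_kubernetes: true'
fi
```

This stand-in only handles plain `major.minor.patch` strings; pre-release and build-metadata handling is what the real semver script adds.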
2025-07-12 15:08:31.509349 | orchestrator |
2025-07-12 15:08:31.509382 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-07-12 15:08:31.566839 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:08:31.566940 | orchestrator |
2025-07-12 15:08:31.566954 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-07-12 15:08:31.646894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-07-12 15:08:31.646989 | orchestrator |
2025-07-12 15:08:31.647004 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-07-12 15:08:32.159653 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:32.159755 | orchestrator |
2025-07-12 15:08:32.159772 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-07-12 15:08:32.539659 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:32.539760 | orchestrator |
2025-07-12 15:08:32.539776 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-07-12 15:08:33.733904 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-07-12 15:08:33.734010 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-07-12 15:08:33.734184 | orchestrator |
2025-07-12 15:08:33.734198 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-07-12 15:08:34.381600 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:34.381714 | orchestrator |
2025-07-12 15:08:34.381730 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-07-12 15:08:34.782932 | orchestrator | ok: [testbed-manager]
2025-07-12 15:08:34.783106 | orchestrator |
2025-07-12 15:08:34.783124 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-07-12 15:08:35.137894 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:35.137996 | orchestrator |
2025-07-12 15:08:35.138011 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-07-12 15:08:35.185122 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:08:35.185236 | orchestrator |
2025-07-12 15:08:35.185252 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-07-12 15:08:35.259254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-07-12 15:08:35.259371 | orchestrator |
2025-07-12 15:08:35.259387 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-07-12 15:08:35.296192 | orchestrator | ok: [testbed-manager]
2025-07-12 15:08:35.296283 | orchestrator |
2025-07-12 15:08:35.296297 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-07-12 15:08:37.259104 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-07-12 15:08:37.259237 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-07-12 15:08:37.259254 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-07-12 15:08:37.259266 | orchestrator |
2025-07-12 15:08:37.259278 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-07-12 15:08:37.969620 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:37.969718 | orchestrator |
2025-07-12 15:08:37.969732 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-07-12 15:08:38.642004 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:38.642220 | orchestrator |
2025-07-12 15:08:38.642238 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-07-12 15:08:39.332900 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:39.333009 | orchestrator |
2025-07-12 15:08:39.333025 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-07-12 15:08:39.389635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-07-12 15:08:39.389731 | orchestrator |
2025-07-12 15:08:39.389748 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-07-12 15:08:39.423889 | orchestrator | ok: [testbed-manager]
2025-07-12 15:08:39.423961 | orchestrator |
2025-07-12 15:08:39.423984 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-07-12 15:08:40.125703 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-07-12 15:08:40.125801 | orchestrator |
2025-07-12 15:08:40.125815 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-07-12 15:08:40.204094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-07-12 15:08:40.204192 | orchestrator |
2025-07-12 15:08:40.204205 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-07-12 15:08:40.902716 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:40.902826 | orchestrator |
2025-07-12 15:08:40.902843 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-07-12 15:08:41.508324 | orchestrator | ok: [testbed-manager]
2025-07-12 15:08:41.508424 | orchestrator |
2025-07-12 15:08:41.508439 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-07-12 15:08:41.564605 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:08:41.564725 | orchestrator |
2025-07-12 15:08:41.564751 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-07-12 15:08:41.626814 | orchestrator | ok: [testbed-manager]
2025-07-12 15:08:41.626906 | orchestrator |
2025-07-12 15:08:41.626921 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-07-12 15:08:42.408411 | orchestrator | changed: [testbed-manager]
2025-07-12 15:08:42.408499 | orchestrator |
2025-07-12 15:08:42.408510 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-07-12 15:09:48.256898 | orchestrator | changed: [testbed-manager]
2025-07-12 15:09:48.257017 | orchestrator |
2025-07-12 15:09:48.257033 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-07-12 15:09:49.204672 | orchestrator | ok: [testbed-manager]
2025-07-12 15:09:49.204780 | orchestrator |
2025-07-12 15:09:49.204796 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-07-12 15:09:49.259660 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:09:49.259747 | orchestrator |
2025-07-12 15:09:49.259761 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-07-12 15:09:51.990592 | orchestrator | changed: [testbed-manager]
2025-07-12 15:09:51.990702 | orchestrator |
2025-07-12 15:09:51.990720 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-07-12 15:09:52.051671 | orchestrator | ok: [testbed-manager]
2025-07-12 15:09:52.051773 | orchestrator |
2025-07-12 15:09:52.051790 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-12 15:09:52.051803 | orchestrator |
2025-07-12 15:09:52.051815 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-07-12 15:09:52.121621 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:09:52.121728 | orchestrator | 2025-07-12 15:09:52.121774 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-07-12 15:10:52.174318 | orchestrator | Pausing for 60 seconds 2025-07-12 15:10:52.174400 | orchestrator | changed: [testbed-manager] 2025-07-12 15:10:52.174407 | orchestrator | 2025-07-12 15:10:52.174412 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-07-12 15:10:56.274560 | orchestrator | changed: [testbed-manager] 2025-07-12 15:10:56.274669 | orchestrator | 2025-07-12 15:10:56.274685 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-07-12 15:11:37.897578 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-07-12 15:11:37.897708 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-07-12 15:11:37.897727 | orchestrator | changed: [testbed-manager]
2025-07-12 15:11:37.897749 | orchestrator |
2025-07-12 15:11:37.897768 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-12 15:11:47.005433 | orchestrator | changed: [testbed-manager]
2025-07-12 15:11:47.005528 | orchestrator |
2025-07-12 15:11:47.005567 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-12 15:11:47.084275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-12 15:11:47.084369 | orchestrator |
2025-07-12 15:11:47.084382 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-12 15:11:47.084395 | orchestrator |
2025-07-12 15:11:47.084406 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-12 15:11:47.132431 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:11:47.132522 | orchestrator |
2025-07-12 15:11:47.132535 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:11:47.132548 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-12 15:11:47.132560 | orchestrator |
2025-07-12 15:11:47.230441 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 15:11:47.230508 | orchestrator | + deactivate
2025-07-12 15:11:47.230515 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-12 15:11:47.230521 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 15:11:47.230526 | orchestrator | + export PATH
2025-07-12 15:11:47.230533 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-12 15:11:47.230538 | orchestrator | + '[' -n '' ']'
2025-07-12 15:11:47.230542 | orchestrator | + hash -r
2025-07-12 15:11:47.230546 | orchestrator | + '[' -n '' ']'
2025-07-12 15:11:47.230550 | orchestrator | + unset VIRTUAL_ENV
2025-07-12 15:11:47.230554 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-12 15:11:47.230558 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-12 15:11:47.230562 | orchestrator | + unset -f deactivate
2025-07-12 15:11:47.230566 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-12 15:11:47.235575 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-12 15:11:47.235585 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-12 15:11:47.235589 | orchestrator | + local max_attempts=60
2025-07-12 15:11:47.235593 | orchestrator | + local name=ceph-ansible
2025-07-12 15:11:47.235596 | orchestrator | + local attempt_num=1
2025-07-12 15:11:47.236314 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:11:47.265152 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:11:47.265214 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-12 15:11:47.265222 | orchestrator | + local max_attempts=60
2025-07-12 15:11:47.265229 | orchestrator | + local name=kolla-ansible
2025-07-12 15:11:47.265235 | orchestrator | + local attempt_num=1
2025-07-12 15:11:47.266199 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-12 15:11:47.307002 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:11:47.307051 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-12 15:11:47.307061 | orchestrator | + local max_attempts=60
2025-07-12 15:11:47.307071 | orchestrator | + local name=osism-ansible
2025-07-12 15:11:47.307080 | orchestrator | + local attempt_num=1
2025-07-12 15:11:47.307809 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-12 15:11:47.342856 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:11:47.342930 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-12 15:11:47.342943 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-12 15:11:47.960194 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-12 15:11:48.151977 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-12 15:11:48.152048 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-07-12 15:11:48.152056 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-07-12 15:11:48.152060 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-07-12 15:11:48.152066 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-07-12 15:11:48.152070 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-07-12 15:11:48.152074 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-07-12 15:11:48.152078 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-07-12 15:11:48.152081 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-07-12 15:11:48.152085 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-07-12 15:11:48.152089 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-07-12 15:11:48.152092 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-07-12 15:11:48.152096 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-07-12 15:11:48.152100 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-07-12 15:11:48.152103 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-07-12 15:11:48.157704 | orchestrator | ++ semver 9.2.0 7.0.0
2025-07-12 15:11:48.203374 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-12 15:11:48.203443 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-07-12 15:11:48.205688 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-07-12 15:12:00.267208 | orchestrator | 2025-07-12 15:12:00 | INFO  | Task 8d4d393b-095a-4637-879e-63eaf6eaa58c (resolvconf) was prepared for execution.
2025-07-12 15:12:00.267348 | orchestrator | 2025-07-12 15:12:00 | INFO  | It takes a moment until task 8d4d393b-095a-4637-879e-63eaf6eaa58c (resolvconf) has been started and output is visible here.
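The `bash -x` trace above repeatedly calls a `wait_for_container_healthy` helper that polls `docker inspect` for a container's health status. A minimal reconstruction of such a helper is sketched below; the variable names and the `docker inspect` format string come from the trace, but the loop structure, sleep interval, and the `INSPECT_CMD` indirection (used so the function can be exercised without a Docker daemon) are assumptions, not the testbed's actual script.

```shell
# Sketch of a health-wait helper, reconstructed from the trace above.
# INSPECT_CMD is an assumed injection point; the real script presumably
# calls /usr/bin/docker inspect directly.
INSPECT_CMD=${INSPECT_CMD:-"/usr/bin/docker inspect -f {{.State.Health.Status}}"}

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll until the container reports "healthy" or attempts run out.
    while [[ "$($INSPECT_CMD "$name")" != healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 1  # interval is an assumption; the trace does not show it
    done
    echo "container $name is healthy"
}
```

In the log it is invoked once per deployment container, e.g. `wait_for_container_healthy 60 ceph-ansible`.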
2025-07-12 15:12:13.327921 | orchestrator |
2025-07-12 15:12:13.328042 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-07-12 15:12:13.328059 | orchestrator |
2025-07-12 15:12:13.328071 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 15:12:13.328084 | orchestrator | Saturday 12 July 2025 15:12:04 +0000 (0:00:00.148) 0:00:00.148 *********
2025-07-12 15:12:13.328095 | orchestrator | ok: [testbed-manager]
2025-07-12 15:12:13.328106 | orchestrator |
2025-07-12 15:12:13.328118 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-07-12 15:12:13.328129 | orchestrator | Saturday 12 July 2025 15:12:07 +0000 (0:00:03.637) 0:00:03.786 *********
2025-07-12 15:12:13.328140 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:12:13.328152 | orchestrator |
2025-07-12 15:12:13.328163 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-07-12 15:12:13.328174 | orchestrator | Saturday 12 July 2025 15:12:07 +0000 (0:00:00.053) 0:00:03.840 *********
2025-07-12 15:12:13.328212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-07-12 15:12:13.328225 | orchestrator |
2025-07-12 15:12:13.328236 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-07-12 15:12:13.328247 | orchestrator | Saturday 12 July 2025 15:12:07 +0000 (0:00:00.076) 0:00:03.916 *********
2025-07-12 15:12:13.328258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 15:12:13.328269 | orchestrator |
2025-07-12 15:12:13.328280 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-07-12 15:12:13.328291 | orchestrator | Saturday 12 July 2025 15:12:07 +0000 (0:00:00.073) 0:00:03.989 *********
2025-07-12 15:12:13.328302 | orchestrator | ok: [testbed-manager]
2025-07-12 15:12:13.328313 | orchestrator |
2025-07-12 15:12:13.328324 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-07-12 15:12:13.328335 | orchestrator | Saturday 12 July 2025 15:12:08 +0000 (0:00:00.990) 0:00:04.980 *********
2025-07-12 15:12:13.328345 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:12:13.328356 | orchestrator |
2025-07-12 15:12:13.328367 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-07-12 15:12:13.328378 | orchestrator | Saturday 12 July 2025 15:12:08 +0000 (0:00:00.047) 0:00:05.028 *********
2025-07-12 15:12:13.328389 | orchestrator | ok: [testbed-manager]
2025-07-12 15:12:13.328400 | orchestrator |
2025-07-12 15:12:13.328411 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-07-12 15:12:13.328421 | orchestrator | Saturday 12 July 2025 15:12:09 +0000 (0:00:00.469) 0:00:05.497 *********
2025-07-12 15:12:13.328432 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:12:13.328444 | orchestrator |
2025-07-12 15:12:13.328457 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-07-12 15:12:13.328470 | orchestrator | Saturday 12 July 2025 15:12:09 +0000 (0:00:00.083) 0:00:05.581 *********
2025-07-12 15:12:13.328482 | orchestrator | changed: [testbed-manager]
2025-07-12 15:12:13.328495 | orchestrator |
2025-07-12 15:12:13.328507 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-07-12 15:12:13.328519 | orchestrator | Saturday 12 July 2025 15:12:09 +0000 (0:00:00.483) 0:00:06.064 *********
2025-07-12 15:12:13.328531 | orchestrator | changed: [testbed-manager]
2025-07-12 15:12:13.328542 | orchestrator |
2025-07-12 15:12:13.328554 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-07-12 15:12:13.328566 | orchestrator | Saturday 12 July 2025 15:12:11 +0000 (0:00:01.028) 0:00:07.093 *********
2025-07-12 15:12:13.328599 | orchestrator | ok: [testbed-manager]
2025-07-12 15:12:13.328612 | orchestrator |
2025-07-12 15:12:13.328624 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-07-12 15:12:13.328636 | orchestrator | Saturday 12 July 2025 15:12:11 +0000 (0:00:00.920) 0:00:08.013 *********
2025-07-12 15:12:13.328648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-07-12 15:12:13.328660 | orchestrator |
2025-07-12 15:12:13.328672 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-07-12 15:12:13.328684 | orchestrator | Saturday 12 July 2025 15:12:12 +0000 (0:00:00.087) 0:00:08.101 *********
2025-07-12 15:12:13.328696 | orchestrator | changed: [testbed-manager]
2025-07-12 15:12:13.328707 | orchestrator |
2025-07-12 15:12:13.328729 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:12:13.328742 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 15:12:13.328755 | orchestrator |
2025-07-12 15:12:13.328767 | orchestrator |
2025-07-12 15:12:13.328778 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:12:13.328791 | orchestrator | Saturday 12 July 2025 15:12:13 +0000 (0:00:01.081) 0:00:09.182 *********
2025-07-12 15:12:13.328803 | orchestrator | ===============================================================================
2025-07-12 15:12:13.328814 | orchestrator | Gathering Facts --------------------------------------------------------- 3.64s
2025-07-12 15:12:13.328824 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.08s
2025-07-12 15:12:13.328835 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.03s
2025-07-12 15:12:13.328846 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.99s
2025-07-12 15:12:13.328856 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.92s
2025-07-12 15:12:13.328867 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.48s
2025-07-12 15:12:13.328894 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s
2025-07-12 15:12:13.328906 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-07-12 15:12:13.328917 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-07-12 15:12:13.328927 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-07-12 15:12:13.328938 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-07-12 15:12:13.328948 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2025-07-12 15:12:13.328959 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s
2025-07-12 15:12:13.580309 | orchestrator | + osism apply sshconfig
2025-07-12 15:12:25.515697 | orchestrator | 2025-07-12 15:12:25 | INFO  | Task aeb14f7e-b576-471d-990e-53fa33202af7 (sshconfig) was prepared for execution.
2025-07-12 15:12:25.515812 | orchestrator | 2025-07-12 15:12:25 | INFO  | It takes a moment until task aeb14f7e-b576-471d-990e-53fa33202af7 (sshconfig) has been started and output is visible here. 2025-07-12 15:12:35.920727 | orchestrator | 2025-07-12 15:12:35.920843 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-07-12 15:12:35.920859 | orchestrator | 2025-07-12 15:12:35.920870 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-07-12 15:12:35.920882 | orchestrator | Saturday 12 July 2025 15:12:29 +0000 (0:00:00.120) 0:00:00.120 ********* 2025-07-12 15:12:35.920893 | orchestrator | ok: [testbed-manager] 2025-07-12 15:12:35.920904 | orchestrator | 2025-07-12 15:12:35.920915 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-07-12 15:12:35.920958 | orchestrator | Saturday 12 July 2025 15:12:29 +0000 (0:00:00.469) 0:00:00.590 ********* 2025-07-12 15:12:35.920970 | orchestrator | changed: [testbed-manager] 2025-07-12 15:12:35.920982 | orchestrator | 2025-07-12 15:12:35.920993 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-07-12 15:12:35.921003 | orchestrator | Saturday 12 July 2025 15:12:30 +0000 (0:00:00.446) 0:00:01.036 ********* 2025-07-12 15:12:35.921014 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-07-12 15:12:35.921025 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-07-12 15:12:35.921035 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-07-12 15:12:35.921046 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-07-12 15:12:35.921057 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-07-12 15:12:35.921067 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-07-12 15:12:35.921077 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2025-07-12 15:12:35.921088 | orchestrator | 2025-07-12 15:12:35.921099 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-07-12 15:12:35.921109 | orchestrator | Saturday 12 July 2025 15:12:35 +0000 (0:00:05.074) 0:00:06.111 ********* 2025-07-12 15:12:35.921120 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:12:35.921130 | orchestrator | 2025-07-12 15:12:35.921141 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-07-12 15:12:35.921151 | orchestrator | Saturday 12 July 2025 15:12:35 +0000 (0:00:00.060) 0:00:06.171 ********* 2025-07-12 15:12:35.921162 | orchestrator | changed: [testbed-manager] 2025-07-12 15:12:35.921172 | orchestrator | 2025-07-12 15:12:35.921186 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:12:35.921294 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 15:12:35.921317 | orchestrator | 2025-07-12 15:12:35.921336 | orchestrator | 2025-07-12 15:12:35.921349 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:12:35.921361 | orchestrator | Saturday 12 July 2025 15:12:35 +0000 (0:00:00.570) 0:00:06.742 ********* 2025-07-12 15:12:35.921374 | orchestrator | =============================================================================== 2025-07-12 15:12:35.921386 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.07s 2025-07-12 15:12:35.921398 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2025-07-12 15:12:35.921410 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.47s 2025-07-12 15:12:35.921421 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.45s 2025-07-12 15:12:35.921434 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-07-12 15:12:36.163602 | orchestrator | + osism apply known-hosts 2025-07-12 15:12:48.052459 | orchestrator | 2025-07-12 15:12:48 | INFO  | Task 42c498a4-fc49-4dcd-b764-5eac901862ee (known-hosts) was prepared for execution. 2025-07-12 15:12:48.052572 | orchestrator | 2025-07-12 15:12:48 | INFO  | It takes a moment until task 42c498a4-fc49-4dcd-b764-5eac901862ee (known-hosts) has been started and output is visible here. 2025-07-12 15:13:04.198167 | orchestrator | 2025-07-12 15:13:04.198311 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-07-12 15:13:04.198330 | orchestrator | 2025-07-12 15:13:04.198342 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-07-12 15:13:04.198354 | orchestrator | Saturday 12 July 2025 15:12:51 +0000 (0:00:00.162) 0:00:00.162 ********* 2025-07-12 15:13:04.198366 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-12 15:13:04.198378 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-12 15:13:04.198389 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-12 15:13:04.198400 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-12 15:13:04.198435 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-12 15:13:04.198447 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-12 15:13:04.198457 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-12 15:13:04.198468 | orchestrator | 2025-07-12 15:13:04.198479 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-07-12 15:13:04.198491 | orchestrator | Saturday 12 July 2025 15:12:57 +0000 (0:00:05.924) 0:00:06.087 ********* 2025-07-12 
15:13:04.198503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-12 15:13:04.198516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-12 15:13:04.198528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-12 15:13:04.198539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-12 15:13:04.198550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-12 15:13:04.198560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-12 15:13:04.198571 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-12 15:13:04.198582 | orchestrator | 2025-07-12 15:13:04.198593 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:04.198604 | orchestrator | Saturday 12 July 2025 15:12:57 +0000 (0:00:00.159) 0:00:06.247 ********* 2025-07-12 15:13:04.198616 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIpThvW/pVD9ckG8Kx73h0BK/SSiylNk1edGIXVgbESlGRYOg8G0Pv8RaAH7leIe7QwcNq+xGy5FyfIDOV+gnME=) 2025-07-12 15:13:04.198630 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2ubFz8b/9mHlXzK2fabBCWH3KVdqktfFGX4XxlhM9Q) 2025-07-12 15:13:04.198709 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDynCpdadUeU70HPUDo78pk/1Knv1Ra2zMCLKAa6SY1woqqEzgn+pDGXabQm+0nh2F2kc1PulFezl5OqKMxfusPpeG2psEPfE9SZ0RzFdsq6zsggwNSdT9XSnYuDPF9PAOAOkMz2LaT3oBYCpvsOmpia3ola/F0lk8v89IuMfI74QsPfygx1uKYqdh2LpXkv/QIzpvxiQFPcZpydqdJr3S+EDSRVf7VQVYgUhlJoiy7E5J4jzxjyltc2okcDMEu2SeQ1QtgAoaiCeff6q7LZyf8Zt4B36A0084x/TUFi2W1Avq9BwVoH/w5KAYnOkbP2SP4Z/zmPGgLiD8h1Y8pEDf3sMl+EdKYH3U0ItKBtDZCcajAV/QQ/Ar5fFxUUZsJX+OkrfC8sMMqo3p7hhSEUxA8d7ep54I3uwubZSKj3bs59U88gwdnoahtkwNhWZ/9A6P6x5vq3rDgztDGTQzFCKTvRAw+ksj9I9KfezGRx0VZXfIJbiPkAuu9/DnJVkm9xmc=) 2025-07-12 15:13:04.198727 | orchestrator | 2025-07-12 15:13:04.198740 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:04.198753 | orchestrator | Saturday 12 July 2025 15:12:59 +0000 (0:00:01.130) 0:00:07.377 ********* 2025-07-12 15:13:04.198766 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJBC9ZS5w8UFKCO+VDBnoDia0wjwoO4Xwb63KxZllob475uJTGH2UFUMxsSrmUCMUU7nEudJ8qsRkgr6h42pStE=) 2025-07-12 15:13:04.198779 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKNgBvX+WK8Ifb0SdTJHgyoM0pq1mmjz8ZwlEuLa553u) 2025-07-12 15:13:04.198827 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCf8AIZiL35YOVc5kq/cnf7gsInQoXhMuc/JITC8Q09abykMWzpwHbyStEcI7XEkoqrTq2lecR5wGgW4Ewsr+AFPgqqX37LNsnSY77Tg64BzkA8cTaZM+OBeBz3UUoNCz3xkzUucx1iXvHYntK6tDGzShjKGfekwVVPQtttTsFgCMlqIt23jfqBpgxYXPuZruhKEM/FfEAZrIT/9t50nMk9HWb5kWJpjAOsgN3O62SBoknoyCE8kVvXjJYKdi1jYsBjB9EM+q7Xmf4C7VAz/StIgQ5i0Q9A20CAUPpzGhZ/o5P0nC4Y64AiT2nibowsAQiTj88p/ZJVPknRZtxkwofulp6kzq1unJqYjKOEOhMW4QUhSfK2M5Q6b5F7//SECh88TCEUk9COZvi1ZZ/2JyCTwZD6yFh4DP/A781VTF/vqW2/mhak2dqQINZsaISVs2fmqCn0m6nKr43lqFIruRqEtcDQRg77+dxpmjONP/1L46jqHlU3tpb+1aqBtAldrZU=) 2025-07-12 15:13:04.198842 | orchestrator | 2025-07-12 15:13:04.198854 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:04.198867 | orchestrator | Saturday 12 July 2025 15:13:00 +0000 (0:00:01.014) 0:00:08.392 ********* 2025-07-12 15:13:04.198880 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLFWMQeY7zuInMFhW6n4LRwtpeUQMrPENV01WN7/nhQrkWYujygQ+U8g3RcDCsrXjPJi+87a564UpTQI8/Xv188=) 2025-07-12 15:13:04.198893 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZhL1/pwwrqD90yQn/ug31sVkwuoKhRomBa8pjtoHNj7M4oVZHsij4YUATy/7UORnssmS1fOaEKPlrAttrC0XoP1xCRlPwJ8CTgEm7eICgMVrSbc1QX/FGwKnPadGY5mJ1y28w68N7zhx+X0x4JFJ2iAgU3YpjGGXWG7TfMBZehxwUprQ9/d+BOj8UkK6/oC04XCp0yYQ9VfeUrh2cBuilgugU4qPHBI3vj+Ndm8MaQaoECUvkUqjyzwCbNJBZmfleGGkqWUvX9XFPD3akgBJGi6T50nNu7TYk+8aJsB6fqaphy/Mw7WBJIo+BwhJKp7wpOm1ZlruddtQrhPs8a/fFDkmBVq25PRnORLN6n4mNeD228aBbR4v914I4IOhG+RllS/gBzLvXnxbzRSw6tdhzpGlOqMB8Lxm2HtJMRtiaYmljCSW+05FbahdqYo9aDrQxOgGhMy9ObQZyq31irGZOJr/St+98Pu6OzzPjoiekEmilOvmwRZJ22wZNXL5VeYs=) 2025-07-12 15:13:04.198906 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/cIMsLF63A91SzHolucdMkkQwKkNm9HFYXsAaRkkFq) 2025-07-12 15:13:04.198918 | orchestrator | 2025-07-12 15:13:04.198931 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:04.198943 | orchestrator | Saturday 12 July 2025 15:13:01 +0000 (0:00:01.043) 0:00:09.435 ********* 2025-07-12 15:13:04.198956 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQ3Di3oppxiojydz397KCNUwY9IQm5kxPhVm/MFvo0nLUMo/EFyIPyQzyWy8gVcebK8hndv36qXVj1KZ9KuRBE=) 2025-07-12 15:13:04.198969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZKXQ6Hv+fZunwffZF6cj8ulk262cFc64ChdgiCulKlpeeee40lu26fdjCp4WjMbzsYLQGgYmN7AGQSwuYyEeMZdOv849iEoDJPFT2llJOWOJYyK7OoVhxtF2dBf0e0zD1FNajXcK2F7siZT/+lv8aFq6O7F1Vh99bIR57TszmTbPKu7EAPmPYMMrDkqqoUOp3+CvaAm4XqEWGK+lYyy6kGSunv1+piKt57VwYdU8qfKw3TlbE54PGPqy+hevjyjv/5/F7VI9pweyfXXkXznGS5wnliOQmJpxQhWFFL+/IxAp0blKmcDEy8qBSHzhcG/ffIV2ivGFV1deeNz2S8qt5A1IQ7WccWnWzwD1FJU57oeU/seLx9dF4mcIVFTZMF4ZOyRC+RDAHggFuttVlE1/AcQN8Rc6YAmcO+RpaNANNZewTlMKoiEcuF28jIlfe55XRozKWFhJ/Dw/2G/WHrIGkLp1+pJfHs2sL110jeuZKdO0x8g6RYC0DKQAPofqpfNk=) 2025-07-12 15:13:04.198982 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYSTz0x9XqBwwgW1oKD3hviqeulr2SzwBADIu/URjkm) 2025-07-12 15:13:04.198994 | orchestrator | 2025-07-12 15:13:04.199007 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:04.199020 | orchestrator | Saturday 12 July 2025 15:13:02 +0000 (0:00:01.026) 0:00:10.462 ********* 2025-07-12 15:13:04.199031 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP7InbJEdAJUlssSmV6idjY0EstPPIOjoQezLxMddLpvIdK3Q3o9o00hZfYtIuNJfQVx8MhmywcInSzwIvhh2jU=) 2025-07-12 15:13:04.199042 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC0aAFADZ4Whhkmc2nL4zkwR+gm4qCu2abHeWJWuMktxwuF214WU8kNgMW8Fc+NuvyS/Xvj7FglfhtSqOPNJYTyqFsI03rp2IbYACnVjeGSl83xk00y3jH/N90yqb6/2kjxRrJWTduPv7xpKvVCN8EAwU6SzyrpA9cVSCnF2uvFEBmvlzSTBXyghcUium1FuWmxwHL0/qID8aHcqvrkIzmx4DmyMk6JMJ1g4vaoBAZ+uJVMOfiDfBs5EXf0OSSWqTXj6dVkhHwkPEYm1oTupyF9NTG2khH1T8cEtW5WkmDNn8+ESxBo9bGwM4VF83P7LDRJPLzupGbwfnuWBmA8B2PQCoCw3ptRolnRoPwpz5/GRpjXBWX+XX3tVw41BYyud8XRyqUCVVA+i3egJFC5q+aldkWdbhe20mZO/iyHJ+G1CSveNqlXuKRTWxMHWtbFPS2qfNJIw/XhaEiaUqaRn8kpDSs8icuF6v5wzY8mf7GK84zBFWaUdq6dTBykTdF/Ylk=) 2025-07-12 15:13:04.199071 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICtknqaG+Zwlp2bm3mNUZihCIxZyUSmDPlTGtOMc/42F) 2025-07-12 15:13:04.199083 | orchestrator | 2025-07-12 15:13:04.199094 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:04.199105 | orchestrator | Saturday 12 July 2025 15:13:03 +0000 (0:00:01.015) 0:00:11.477 ********* 2025-07-12 15:13:04.199124 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqu9bfZKdSoJgkFIBqQyYnrZCsrUYfmXWFRYHEiTPScxDDdyjhZhIlWT9o5HZKSHCGf321MGbqKRYQ5LXqPgojDAKyoI7KGNVOFxbssLtuSHGutffNLdCkzsJgT4bRKW8R0L6eZ9Msy7rw65SpZJABSH5XcV8asOHXUt1zM0AiiljF0Fs0VL451GcX5jIr9M74y41Tp/lIyyyq9gKEafqGPUq3DXYYR2F8UB+QTWmDt2SRpz9J2kKrO87ccdfGALVpEsV4oT5hgrUW7UTrwAXnttxoauj29Wz/DFV6f/hlClvwYrcgTXRnfrDVSqO6yeBUJB3etRlUmJcb93drA2J7UdZxbhRrBDghNdOdsx215ZQkSgHWPpCE9/hKgujDj5xXTRUnjZ00696+5+QFtGgXefGHa0p5AGi1Pompdw9RMbn4PaAPseWJKTQq6sHnJ2FQ72OhfgFmnv13ikVnWZ4XXr30PrVTc2dqWunFsBYyOQ17pwr6w2yo2DY2EmYyrX8=) 2025-07-12 15:13:14.752322 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMgXkpFZPi738VKZX+1/uosGFC1+A5qVfnmE9O4+sC8hD+XgSIr/Qc2tGxR5b7HABlk+Y7Ntg9+rWVKZF4sACHU=) 2025-07-12 15:13:14.752435 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICCFZm1bIKfU5oUXEB3xmEMBq+n8bZkKI3bncWsWputS) 2025-07-12 15:13:14.752452 | orchestrator | 2025-07-12 15:13:14.752465 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:14.752478 | orchestrator | Saturday 12 July 2025 15:13:04 +0000 (0:00:01.008) 0:00:12.486 ********* 2025-07-12 15:13:14.752491 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDW7OV+/2NRiFZIborC7yQe4knSxbi3eRJIVqjdl250iSnPsojIuRaXRIgg+i0bj1O+m2sLH0sCWvjBMyePojS+IPbTDLgtmAJO26aXr0+5omsBj3OQbw3dY50tAHfReqeUKT7Xpu1BDfwvv1IzCtJgNB+U/Wvljf5gkNNOve1KbVrXaNRveRYm4JoWz7M/sObUBebvNITWtncDg2R2GXzmBqgYtoeHBvd3zORIYTbeH3tEb++nN5oljm5pXxEQS05hf40MCVJI2eI0IAEpVj3Qi2nAzTUH8i3Mw3ypwwbIrQlnGzd5q2jk6KK8DUIF6PRCeIO6PHccwa+zZSGeOVwhi2iKsGXv6UB1d9bwB3yLcBam9CvE+RVR0ZMr8TftW+tdjhWZ+pb/OHWtBCa6bC0BhykyxXjQAaTFV8jM1PnvPOxo30IHpQ5Sq8UQ7q6a7mkvCLcYpR+9SzGNK/AuTzaKKjJnVdRJjT9Perfvy2hUFVx9rm+pXNEgu0kZKgou59c=) 2025-07-12 15:13:14.752505 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDatUs1Qf180oqyydg9miqYyas2iXnLDzwPAwliCA3hPrBJO+NW6NwjGgo0uKg0D5pvMrW3REXIcBqDeqbmsKUc=) 2025-07-12 15:13:14.752517 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJPaVCv/MMZHibfaVgWYqObMdxRcQZrNslQLBn/rL+wp) 2025-07-12 15:13:14.752528 | orchestrator | 2025-07-12 15:13:14.752539 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-07-12 15:13:14.752551 | orchestrator | Saturday 12 July 2025 15:13:05 +0000 (0:00:01.046) 0:00:13.532 ********* 2025-07-12 15:13:14.752563 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-12 15:13:14.752573 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-12 15:13:14.752584 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-1) 2025-07-12 15:13:14.752595 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-12 15:13:14.752605 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-12 15:13:14.752616 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-12 15:13:14.752627 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-12 15:13:14.752637 | orchestrator | 2025-07-12 15:13:14.752672 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-07-12 15:13:14.752685 | orchestrator | Saturday 12 July 2025 15:13:10 +0000 (0:00:05.192) 0:00:18.725 ********* 2025-07-12 15:13:14.752697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-12 15:13:14.752709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-12 15:13:14.752721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-12 15:13:14.752732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-12 15:13:14.752743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-12 15:13:14.752755 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-12 15:13:14.752765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-12 15:13:14.752776 | orchestrator | 2025-07-12 15:13:14.752787 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:14.752798 | orchestrator | Saturday 12 July 2025 15:13:10 +0000 (0:00:00.163) 0:00:18.889 ********* 2025-07-12 15:13:14.752809 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2ubFz8b/9mHlXzK2fabBCWH3KVdqktfFGX4XxlhM9Q) 2025-07-12 15:13:14.752865 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDynCpdadUeU70HPUDo78pk/1Knv1Ra2zMCLKAa6SY1woqqEzgn+pDGXabQm+0nh2F2kc1PulFezl5OqKMxfusPpeG2psEPfE9SZ0RzFdsq6zsggwNSdT9XSnYuDPF9PAOAOkMz2LaT3oBYCpvsOmpia3ola/F0lk8v89IuMfI74QsPfygx1uKYqdh2LpXkv/QIzpvxiQFPcZpydqdJr3S+EDSRVf7VQVYgUhlJoiy7E5J4jzxjyltc2okcDMEu2SeQ1QtgAoaiCeff6q7LZyf8Zt4B36A0084x/TUFi2W1Avq9BwVoH/w5KAYnOkbP2SP4Z/zmPGgLiD8h1Y8pEDf3sMl+EdKYH3U0ItKBtDZCcajAV/QQ/Ar5fFxUUZsJX+OkrfC8sMMqo3p7hhSEUxA8d7ep54I3uwubZSKj3bs59U88gwdnoahtkwNhWZ/9A6P6x5vq3rDgztDGTQzFCKTvRAw+ksj9I9KfezGRx0VZXfIJbiPkAuu9/DnJVkm9xmc=) 2025-07-12 15:13:14.752880 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIpThvW/pVD9ckG8Kx73h0BK/SSiylNk1edGIXVgbESlGRYOg8G0Pv8RaAH7leIe7QwcNq+xGy5FyfIDOV+gnME=) 2025-07-12 15:13:14.752892 | orchestrator | 2025-07-12 15:13:14.752905 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:14.752917 | orchestrator | Saturday 12 July 2025 
15:13:11 +0000 (0:00:01.045) 0:00:19.934 ********* 2025-07-12 15:13:14.752935 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKNgBvX+WK8Ifb0SdTJHgyoM0pq1mmjz8ZwlEuLa553u) 2025-07-12 15:13:14.752948 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCf8AIZiL35YOVc5kq/cnf7gsInQoXhMuc/JITC8Q09abykMWzpwHbyStEcI7XEkoqrTq2lecR5wGgW4Ewsr+AFPgqqX37LNsnSY77Tg64BzkA8cTaZM+OBeBz3UUoNCz3xkzUucx1iXvHYntK6tDGzShjKGfekwVVPQtttTsFgCMlqIt23jfqBpgxYXPuZruhKEM/FfEAZrIT/9t50nMk9HWb5kWJpjAOsgN3O62SBoknoyCE8kVvXjJYKdi1jYsBjB9EM+q7Xmf4C7VAz/StIgQ5i0Q9A20CAUPpzGhZ/o5P0nC4Y64AiT2nibowsAQiTj88p/ZJVPknRZtxkwofulp6kzq1unJqYjKOEOhMW4QUhSfK2M5Q6b5F7//SECh88TCEUk9COZvi1ZZ/2JyCTwZD6yFh4DP/A781VTF/vqW2/mhak2dqQINZsaISVs2fmqCn0m6nKr43lqFIruRqEtcDQRg77+dxpmjONP/1L46jqHlU3tpb+1aqBtAldrZU=) 2025-07-12 15:13:14.752969 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJBC9ZS5w8UFKCO+VDBnoDia0wjwoO4Xwb63KxZllob475uJTGH2UFUMxsSrmUCMUU7nEudJ8qsRkgr6h42pStE=) 2025-07-12 15:13:14.752981 | orchestrator | 2025-07-12 15:13:14.752994 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:14.753006 | orchestrator | Saturday 12 July 2025 15:13:12 +0000 (0:00:00.993) 0:00:20.928 ********* 2025-07-12 15:13:14.753017 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/cIMsLF63A91SzHolucdMkkQwKkNm9HFYXsAaRkkFq) 2025-07-12 15:13:14.753030 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDZhL1/pwwrqD90yQn/ug31sVkwuoKhRomBa8pjtoHNj7M4oVZHsij4YUATy/7UORnssmS1fOaEKPlrAttrC0XoP1xCRlPwJ8CTgEm7eICgMVrSbc1QX/FGwKnPadGY5mJ1y28w68N7zhx+X0x4JFJ2iAgU3YpjGGXWG7TfMBZehxwUprQ9/d+BOj8UkK6/oC04XCp0yYQ9VfeUrh2cBuilgugU4qPHBI3vj+Ndm8MaQaoECUvkUqjyzwCbNJBZmfleGGkqWUvX9XFPD3akgBJGi6T50nNu7TYk+8aJsB6fqaphy/Mw7WBJIo+BwhJKp7wpOm1ZlruddtQrhPs8a/fFDkmBVq25PRnORLN6n4mNeD228aBbR4v914I4IOhG+RllS/gBzLvXnxbzRSw6tdhzpGlOqMB8Lxm2HtJMRtiaYmljCSW+05FbahdqYo9aDrQxOgGhMy9ObQZyq31irGZOJr/St+98Pu6OzzPjoiekEmilOvmwRZJ22wZNXL5VeYs=) 2025-07-12 15:13:14.753043 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLFWMQeY7zuInMFhW6n4LRwtpeUQMrPENV01WN7/nhQrkWYujygQ+U8g3RcDCsrXjPJi+87a564UpTQI8/Xv188=) 2025-07-12 15:13:14.753055 | orchestrator | 2025-07-12 15:13:14.753067 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:14.753078 | orchestrator | Saturday 12 July 2025 15:13:13 +0000 (0:00:01.086) 0:00:22.015 ********* 2025-07-12 15:13:14.753089 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZKXQ6Hv+fZunwffZF6cj8ulk262cFc64ChdgiCulKlpeeee40lu26fdjCp4WjMbzsYLQGgYmN7AGQSwuYyEeMZdOv849iEoDJPFT2llJOWOJYyK7OoVhxtF2dBf0e0zD1FNajXcK2F7siZT/+lv8aFq6O7F1Vh99bIR57TszmTbPKu7EAPmPYMMrDkqqoUOp3+CvaAm4XqEWGK+lYyy6kGSunv1+piKt57VwYdU8qfKw3TlbE54PGPqy+hevjyjv/5/F7VI9pweyfXXkXznGS5wnliOQmJpxQhWFFL+/IxAp0blKmcDEy8qBSHzhcG/ffIV2ivGFV1deeNz2S8qt5A1IQ7WccWnWzwD1FJU57oeU/seLx9dF4mcIVFTZMF4ZOyRC+RDAHggFuttVlE1/AcQN8Rc6YAmcO+RpaNANNZewTlMKoiEcuF28jIlfe55XRozKWFhJ/Dw/2G/WHrIGkLp1+pJfHs2sL110jeuZKdO0x8g6RYC0DKQAPofqpfNk=) 2025-07-12 15:13:14.753101 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQ3Di3oppxiojydz397KCNUwY9IQm5kxPhVm/MFvo0nLUMo/EFyIPyQzyWy8gVcebK8hndv36qXVj1KZ9KuRBE=) 
2025-07-12 15:13:14.753122 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYSTz0x9XqBwwgW1oKD3hviqeulr2SzwBADIu/URjkm) 2025-07-12 15:13:18.786413 | orchestrator | 2025-07-12 15:13:18.786523 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:18.786539 | orchestrator | Saturday 12 July 2025 15:13:14 +0000 (0:00:01.026) 0:00:23.042 ********* 2025-07-12 15:13:18.786555 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0aAFADZ4Whhkmc2nL4zkwR+gm4qCu2abHeWJWuMktxwuF214WU8kNgMW8Fc+NuvyS/Xvj7FglfhtSqOPNJYTyqFsI03rp2IbYACnVjeGSl83xk00y3jH/N90yqb6/2kjxRrJWTduPv7xpKvVCN8EAwU6SzyrpA9cVSCnF2uvFEBmvlzSTBXyghcUium1FuWmxwHL0/qID8aHcqvrkIzmx4DmyMk6JMJ1g4vaoBAZ+uJVMOfiDfBs5EXf0OSSWqTXj6dVkhHwkPEYm1oTupyF9NTG2khH1T8cEtW5WkmDNn8+ESxBo9bGwM4VF83P7LDRJPLzupGbwfnuWBmA8B2PQCoCw3ptRolnRoPwpz5/GRpjXBWX+XX3tVw41BYyud8XRyqUCVVA+i3egJFC5q+aldkWdbhe20mZO/iyHJ+G1CSveNqlXuKRTWxMHWtbFPS2qfNJIw/XhaEiaUqaRn8kpDSs8icuF6v5wzY8mf7GK84zBFWaUdq6dTBykTdF/Ylk=) 2025-07-12 15:13:18.786571 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP7InbJEdAJUlssSmV6idjY0EstPPIOjoQezLxMddLpvIdK3Q3o9o00hZfYtIuNJfQVx8MhmywcInSzwIvhh2jU=) 2025-07-12 15:13:18.786610 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICtknqaG+Zwlp2bm3mNUZihCIxZyUSmDPlTGtOMc/42F) 2025-07-12 15:13:18.786623 | orchestrator | 2025-07-12 15:13:18.786634 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:18.786646 | orchestrator | Saturday 12 July 2025 15:13:15 +0000 (0:00:01.041) 0:00:24.084 ********* 2025-07-12 15:13:18.786657 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAICCFZm1bIKfU5oUXEB3xmEMBq+n8bZkKI3bncWsWputS) 2025-07-12 15:13:18.786669 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqu9bfZKdSoJgkFIBqQyYnrZCsrUYfmXWFRYHEiTPScxDDdyjhZhIlWT9o5HZKSHCGf321MGbqKRYQ5LXqPgojDAKyoI7KGNVOFxbssLtuSHGutffNLdCkzsJgT4bRKW8R0L6eZ9Msy7rw65SpZJABSH5XcV8asOHXUt1zM0AiiljF0Fs0VL451GcX5jIr9M74y41Tp/lIyyyq9gKEafqGPUq3DXYYR2F8UB+QTWmDt2SRpz9J2kKrO87ccdfGALVpEsV4oT5hgrUW7UTrwAXnttxoauj29Wz/DFV6f/hlClvwYrcgTXRnfrDVSqO6yeBUJB3etRlUmJcb93drA2J7UdZxbhRrBDghNdOdsx215ZQkSgHWPpCE9/hKgujDj5xXTRUnjZ00696+5+QFtGgXefGHa0p5AGi1Pompdw9RMbn4PaAPseWJKTQq6sHnJ2FQ72OhfgFmnv13ikVnWZ4XXr30PrVTc2dqWunFsBYyOQ17pwr6w2yo2DY2EmYyrX8=) 2025-07-12 15:13:18.786698 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMgXkpFZPi738VKZX+1/uosGFC1+A5qVfnmE9O4+sC8hD+XgSIr/Qc2tGxR5b7HABlk+Y7Ntg9+rWVKZF4sACHU=) 2025-07-12 15:13:18.786709 | orchestrator | 2025-07-12 15:13:18.786721 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 15:13:18.786732 | orchestrator | Saturday 12 July 2025 15:13:16 +0000 (0:00:01.017) 0:00:25.102 ********* 2025-07-12 15:13:18.786743 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDW7OV+/2NRiFZIborC7yQe4knSxbi3eRJIVqjdl250iSnPsojIuRaXRIgg+i0bj1O+m2sLH0sCWvjBMyePojS+IPbTDLgtmAJO26aXr0+5omsBj3OQbw3dY50tAHfReqeUKT7Xpu1BDfwvv1IzCtJgNB+U/Wvljf5gkNNOve1KbVrXaNRveRYm4JoWz7M/sObUBebvNITWtncDg2R2GXzmBqgYtoeHBvd3zORIYTbeH3tEb++nN5oljm5pXxEQS05hf40MCVJI2eI0IAEpVj3Qi2nAzTUH8i3Mw3ypwwbIrQlnGzd5q2jk6KK8DUIF6PRCeIO6PHccwa+zZSGeOVwhi2iKsGXv6UB1d9bwB3yLcBam9CvE+RVR0ZMr8TftW+tdjhWZ+pb/OHWtBCa6bC0BhykyxXjQAaTFV8jM1PnvPOxo30IHpQ5Sq8UQ7q6a7mkvCLcYpR+9SzGNK/AuTzaKKjJnVdRJjT9Perfvy2hUFVx9rm+pXNEgu0kZKgou59c=) 2025-07-12 15:13:18.786754 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDatUs1Qf180oqyydg9miqYyas2iXnLDzwPAwliCA3hPrBJO+NW6NwjGgo0uKg0D5pvMrW3REXIcBqDeqbmsKUc=) 2025-07-12 15:13:18.786766 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJPaVCv/MMZHibfaVgWYqObMdxRcQZrNslQLBn/rL+wp) 2025-07-12 15:13:18.786777 | orchestrator | 2025-07-12 15:13:18.786788 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-07-12 15:13:18.786799 | orchestrator | Saturday 12 July 2025 15:13:17 +0000 (0:00:01.025) 0:00:26.127 ********* 2025-07-12 15:13:18.786810 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-12 15:13:18.786822 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-12 15:13:18.786833 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-12 15:13:18.786844 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-12 15:13:18.786855 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-12 15:13:18.786866 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-12 15:13:18.786877 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-12 15:13:18.786888 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:13:18.786899 | orchestrator | 2025-07-12 15:13:18.786928 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-07-12 15:13:18.786941 | orchestrator | Saturday 12 July 2025 15:13:17 +0000 (0:00:00.165) 0:00:26.292 ********* 2025-07-12 15:13:18.786954 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:13:18.786974 | orchestrator | 2025-07-12 15:13:18.786986 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-07-12 15:13:18.786999 | orchestrator | Saturday 12 July 2025 
15:13:18 +0000 (0:00:00.052) 0:00:26.345 ********* 2025-07-12 15:13:18.787011 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:13:18.787023 | orchestrator | 2025-07-12 15:13:18.787035 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-07-12 15:13:18.787047 | orchestrator | Saturday 12 July 2025 15:13:18 +0000 (0:00:00.048) 0:00:26.394 ********* 2025-07-12 15:13:18.787059 | orchestrator | changed: [testbed-manager] 2025-07-12 15:13:18.787069 | orchestrator | 2025-07-12 15:13:18.787080 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:13:18.787091 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 15:13:18.787103 | orchestrator | 2025-07-12 15:13:18.787114 | orchestrator | 2025-07-12 15:13:18.787124 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:13:18.787135 | orchestrator | Saturday 12 July 2025 15:13:18 +0000 (0:00:00.460) 0:00:26.855 ********* 2025-07-12 15:13:18.787146 | orchestrator | =============================================================================== 2025-07-12 15:13:18.787156 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.92s 2025-07-12 15:13:18.787167 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.19s 2025-07-12 15:13:18.787178 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-07-12 15:13:18.787189 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-07-12 15:13:18.787199 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-07-12 15:13:18.787210 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 
2025-07-12 15:13:18.787220 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-12 15:13:18.787267 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-12 15:13:18.787287 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-07-12 15:13:18.787305 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-07-12 15:13:18.787322 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-07-12 15:13:18.787334 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-07-12 15:13:18.787344 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-07-12 15:13:18.787440 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-07-12 15:13:18.787452 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-07-12 15:13:18.787467 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-07-12 15:13:18.787486 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.46s 2025-07-12 15:13:18.787504 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-07-12 15:13:18.787522 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-07-12 15:13:18.787535 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-07-12 15:13:19.132870 | orchestrator | + osism apply squid 2025-07-12 15:13:31.000942 | orchestrator | 2025-07-12 15:13:30 | INFO  | Task d84ededa-f4c3-4169-a499-28246cf64355 (squid) was prepared for execution. 
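The long run of "Write scanned known_hosts entries" tasks above follows a simple pattern: run `ssh-keyscan` against each host (once by hostname, once by `ansible_host` IP), append the harvested keys to the known_hosts file, and set the file permissions once at the end. A minimal offline sketch of that pattern, with placeholder hosts, a placeholder key, and an illustrative file path (the real role's defaults may differ):

```shell
# A minimal offline sketch of the known_hosts pattern seen above: collect one
# entry per host, then fix the file permissions in a final step.
KNOWN_HOSTS=/tmp/known_hosts.demo
: > "$KNOWN_HOSTS"
for host in testbed-node-1 testbed-node-2 testbed-node-3; do
    # Live equivalent (needs network access to each host):
    #   ssh-keyscan -t rsa,ecdsa,ed25519 "$host" >> "$KNOWN_HOSTS"
    # Here a placeholder entry keeps the sketch runnable offline.
    printf '%s ssh-ed25519 AAAA...placeholder\n' "$host" >> "$KNOWN_HOSTS"
done
chmod 0644 "$KNOWN_HOSTS"   # mirrors the closing "Set file permissions" task
wc -l < "$KNOWN_HOSTS"      # one line per scanned host
```

In the real run each host contributes three entries (rsa, ecdsa, ed25519), which is why every "Write scanned known_hosts entries" task above reports three changed items.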
2025-07-12 15:13:31.001059 | orchestrator | 2025-07-12 15:13:30 | INFO  | It takes a moment until task d84ededa-f4c3-4169-a499-28246cf64355 (squid) has been started and output is visible here. 2025-07-12 15:15:24.234428 | orchestrator | 2025-07-12 15:15:24.234551 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-07-12 15:15:24.234595 | orchestrator | 2025-07-12 15:15:24.234608 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-07-12 15:15:24.234619 | orchestrator | Saturday 12 July 2025 15:13:34 +0000 (0:00:00.162) 0:00:00.162 ********* 2025-07-12 15:15:24.234630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 15:15:24.234642 | orchestrator | 2025-07-12 15:15:24.234653 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-07-12 15:15:24.234664 | orchestrator | Saturday 12 July 2025 15:13:34 +0000 (0:00:00.085) 0:00:00.247 ********* 2025-07-12 15:15:24.234675 | orchestrator | ok: [testbed-manager] 2025-07-12 15:15:24.234687 | orchestrator | 2025-07-12 15:15:24.234698 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-07-12 15:15:24.234709 | orchestrator | Saturday 12 July 2025 15:13:36 +0000 (0:00:01.334) 0:00:01.581 ********* 2025-07-12 15:15:24.234720 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-07-12 15:15:24.234731 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-07-12 15:15:24.234742 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-07-12 15:15:24.234752 | orchestrator | 2025-07-12 15:15:24.234767 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-07-12 15:15:24.234785 | orchestrator | Saturday 
12 July 2025 15:13:37 +0000 (0:00:01.093) 0:00:02.675 ********* 2025-07-12 15:15:24.234805 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-07-12 15:15:24.234826 | orchestrator | 2025-07-12 15:15:24.234848 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-07-12 15:15:24.234868 | orchestrator | Saturday 12 July 2025 15:13:38 +0000 (0:00:01.032) 0:00:03.708 ********* 2025-07-12 15:15:24.234887 | orchestrator | ok: [testbed-manager] 2025-07-12 15:15:24.234908 | orchestrator | 2025-07-12 15:15:24.234929 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-07-12 15:15:24.234949 | orchestrator | Saturday 12 July 2025 15:13:38 +0000 (0:00:00.346) 0:00:04.054 ********* 2025-07-12 15:15:24.234970 | orchestrator | changed: [testbed-manager] 2025-07-12 15:15:24.234990 | orchestrator | 2025-07-12 15:15:24.235030 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-07-12 15:15:24.235052 | orchestrator | Saturday 12 July 2025 15:13:39 +0000 (0:00:00.881) 0:00:04.935 ********* 2025-07-12 15:15:24.235071 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-07-12 15:15:24.235115 | orchestrator | ok: [testbed-manager] 2025-07-12 15:15:24.235136 | orchestrator | 2025-07-12 15:15:24.235154 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-07-12 15:15:24.235179 | orchestrator | Saturday 12 July 2025 15:14:10 +0000 (0:00:31.262) 0:00:36.198 ********* 2025-07-12 15:15:24.235198 | orchestrator | changed: [testbed-manager] 2025-07-12 15:15:24.235210 | orchestrator | 2025-07-12 15:15:24.235221 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-07-12 15:15:24.235231 | orchestrator | Saturday 12 July 2025 15:14:23 +0000 (0:00:12.445) 0:00:48.643 ********* 2025-07-12 15:15:24.235242 | orchestrator | Pausing for 60 seconds 2025-07-12 15:15:24.235253 | orchestrator | changed: [testbed-manager] 2025-07-12 15:15:24.235299 | orchestrator | 2025-07-12 15:15:24.235318 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-07-12 15:15:24.235337 | orchestrator | Saturday 12 July 2025 15:15:23 +0000 (0:01:00.068) 0:01:48.712 ********* 2025-07-12 15:15:24.235356 | orchestrator | ok: [testbed-manager] 2025-07-12 15:15:24.235375 | orchestrator | 2025-07-12 15:15:24.235394 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-07-12 15:15:24.235412 | orchestrator | Saturday 12 July 2025 15:15:23 +0000 (0:00:00.071) 0:01:48.784 ********* 2025-07-12 15:15:24.235431 | orchestrator | changed: [testbed-manager] 2025-07-12 15:15:24.235450 | orchestrator | 2025-07-12 15:15:24.235481 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:15:24.235501 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:15:24.235519 | orchestrator | 2025-07-12 15:15:24.235539 | orchestrator | 2025-07-12 15:15:24.235559 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-07-12 15:15:24.235577 | orchestrator | Saturday 12 July 2025 15:15:24 +0000 (0:00:00.607) 0:01:49.392 ********* 2025-07-12 15:15:24.235595 | orchestrator | =============================================================================== 2025-07-12 15:15:24.235613 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-07-12 15:15:24.235632 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.26s 2025-07-12 15:15:24.235650 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.45s 2025-07-12 15:15:24.235668 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.33s 2025-07-12 15:15:24.235687 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.09s 2025-07-12 15:15:24.235705 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.03s 2025-07-12 15:15:24.235724 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2025-07-12 15:15:24.235741 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2025-07-12 15:15:24.235760 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-07-12 15:15:24.235778 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-07-12 15:15:24.235799 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-07-12 15:15:24.481702 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-07-12 15:15:24.481901 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-07-12 15:15:24.486716 | orchestrator | ++ semver 9.2.0 9.0.0 
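The shell trace around this point first rewrites `docker_namespace` via `sed`, then gates further handling on a version comparison: `semver 9.2.0 9.0.0` yields 1 (first argument is newer), so the subsequent `[[ 1 -lt 0 ]]` test is false and the downgrade path is skipped. A rough stand-in for such a three-way comparison, using `sort -V`; `vercmp` here is only an illustration of the idea, the job's actual `semver` helper may be implemented differently:

```shell
# vercmp A B -> prints -1 if A < B, 0 if equal, 1 if A > B (sort -V based sketch)
vercmp() {
    if [ "$1" = "$2" ]; then echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then echo -1
    else echo 1
    fi
}
result=$(vercmp 9.2.0 9.0.0)   # prints nothing; result=1, 9.2.0 is newer
if [ "$result" -lt 0 ]; then
    echo "older release: take legacy path"
else
    echo "no downgrade handling needed"
fi
```

Running this prints "no downgrade handling needed", matching the behaviour seen in the trace.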
2025-07-12 15:15:24.541348 | orchestrator | + [[ 1 -lt 0 ]]
2025-07-12 15:15:24.541421 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-07-12 15:15:36.404399 | orchestrator | 2025-07-12 15:15:36 | INFO  | Task c3110bd5-ee33-43a5-8b15-628b7e46b8bc (operator) was prepared for execution.
2025-07-12 15:15:36.404503 | orchestrator | 2025-07-12 15:15:36 | INFO  | It takes a moment until task c3110bd5-ee33-43a5-8b15-628b7e46b8bc (operator) has been started and output is visible here.
2025-07-12 15:15:51.717980 | orchestrator |
2025-07-12 15:15:51.718167 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-07-12 15:15:51.718186 | orchestrator |
2025-07-12 15:15:51.718198 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 15:15:51.718209 | orchestrator | Saturday 12 July 2025 15:15:40 +0000 (0:00:00.144) 0:00:00.144 *********
2025-07-12 15:15:51.718221 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:15:51.718233 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:15:51.718243 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:15:51.718254 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:15:51.718297 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:15:51.718308 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:15:51.718319 | orchestrator |
2025-07-12 15:15:51.718330 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-07-12 15:15:51.718341 | orchestrator | Saturday 12 July 2025 15:15:43 +0000 (0:00:03.304) 0:00:03.449 *********
2025-07-12 15:15:51.718352 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:15:51.718363 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:15:51.718373 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:15:51.718384 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:15:51.718394 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:15:51.718405 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:15:51.718415 | orchestrator |
2025-07-12 15:15:51.718426 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-07-12 15:15:51.718457 | orchestrator |
2025-07-12 15:15:51.718469 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-12 15:15:51.718480 | orchestrator | Saturday 12 July 2025 15:15:44 +0000 (0:00:00.868) 0:00:04.317 *********
2025-07-12 15:15:51.718490 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:15:51.718501 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:15:51.718511 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:15:51.718521 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:15:51.718532 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:15:51.718542 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:15:51.718552 | orchestrator |
2025-07-12 15:15:51.718563 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-12 15:15:51.718574 | orchestrator | Saturday 12 July 2025 15:15:44 +0000 (0:00:00.153) 0:00:04.471 *********
2025-07-12 15:15:51.718584 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:15:51.718595 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:15:51.718605 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:15:51.718616 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:15:51.718626 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:15:51.718637 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:15:51.718647 | orchestrator |
2025-07-12 15:15:51.718658 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-12 15:15:51.718669 | orchestrator | Saturday 12 July 2025 15:15:44 +0000 (0:00:00.563) 0:00:04.626 *********
2025-07-12 15:15:51.718680 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:15:51.718692 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:15:51.718702 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:15:51.718713 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:15:51.718724 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:15:51.718734 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:15:51.718745 | orchestrator |
2025-07-12 15:15:51.718755 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-12 15:15:51.718766 | orchestrator | Saturday 12 July 2025 15:15:45 +0000 (0:00:00.563) 0:00:05.190 *********
2025-07-12 15:15:51.718777 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:15:51.718787 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:15:51.718798 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:15:51.718809 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:15:51.718819 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:15:51.718830 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:15:51.718840 | orchestrator |
2025-07-12 15:15:51.718851 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-12 15:15:51.718862 | orchestrator | Saturday 12 July 2025 15:15:46 +0000 (0:00:00.823) 0:00:06.014 *********
2025-07-12 15:15:51.718873 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-07-12 15:15:51.718884 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-07-12 15:15:51.718894 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-07-12 15:15:51.718905 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-07-12 15:15:51.718915 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-07-12 15:15:51.718926 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-07-12 15:15:51.718936 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-07-12 15:15:51.718946 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-07-12 15:15:51.718957 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-07-12 15:15:51.718967 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-07-12 15:15:51.718978 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-07-12 15:15:51.718988 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-07-12 15:15:51.718999 | orchestrator |
2025-07-12 15:15:51.719010 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-12 15:15:51.719021 | orchestrator | Saturday 12 July 2025 15:15:47 +0000 (0:00:01.148) 0:00:07.163 *********
2025-07-12 15:15:51.719032 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:15:51.719051 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:15:51.719061 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:15:51.719072 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:15:51.719082 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:15:51.719097 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:15:51.719108 | orchestrator |
2025-07-12 15:15:51.719119 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-12 15:15:51.719131 | orchestrator | Saturday 12 July 2025 15:15:48 +0000 (0:00:01.177) 0:00:08.340 *********
2025-07-12 15:15:51.719141 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-07-12 15:15:51.719152 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-07-12 15:15:51.719162 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-07-12 15:15:51.719173 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 15:15:51.719202 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 15:15:51.719213 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 15:15:51.719224 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 15:15:51.719234 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 15:15:51.719290 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 15:15:51.719302 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-07-12 15:15:51.719313 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-07-12 15:15:51.719324 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-07-12 15:15:51.719334 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-07-12 15:15:51.719345 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-07-12 15:15:51.719355 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-07-12 15:15:51.719366 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-07-12 15:15:51.719376 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-07-12 15:15:51.719387 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-07-12 15:15:51.719398 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-07-12 15:15:51.719408 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-07-12 15:15:51.719419 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-07-12 15:15:51.719429 | orchestrator |
2025-07-12 15:15:51.719440 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-12 15:15:51.719470 | orchestrator | Saturday 12 July 2025 15:15:49 +0000 (0:00:01.258) 0:00:09.598 *********
2025-07-12 15:15:51.719481 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:15:51.719492 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:15:51.719502 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:15:51.719513 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:15:51.719528 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:15:51.719539 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:15:51.719549 | orchestrator |
2025-07-12 15:15:51.719560 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-12 15:15:51.719571 | orchestrator | Saturday 12 July 2025 15:15:49 +0000 (0:00:00.149) 0:00:09.748 *********
2025-07-12 15:15:51.719582 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:15:51.719593 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:15:51.719603 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:15:51.719614 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:15:51.719625 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:15:51.719635 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:15:51.719646 | orchestrator |
2025-07-12 15:15:51.719665 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-12 15:15:51.719676 | orchestrator | Saturday 12 July 2025 15:15:50 +0000 (0:00:00.635) 0:00:10.384 *********
2025-07-12 15:15:51.719687 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:15:51.719697 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:15:51.719708 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:15:51.719719 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:15:51.719729 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:15:51.719740 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:15:51.719751 | orchestrator |
2025-07-12 15:15:51.719762 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-12 15:15:51.719772 | orchestrator | Saturday 12 July 2025 15:15:50 +0000 (0:00:00.172) 0:00:10.556 *********
2025-07-12 15:15:51.719783 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 15:15:51.719794 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:15:51.719804 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 15:15:51.719815 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 15:15:51.719825 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:15:51.719844 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:15:51.719865 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 15:15:51.719884 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:15:51.719904 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-12 15:15:51.719920 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-12 15:15:51.719937 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:15:51.719954 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:15:51.719973 | orchestrator |
2025-07-12 15:15:51.720012 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-12 15:15:51.720032 | orchestrator | Saturday 12 July 2025 15:15:51 +0000 (0:00:00.713) 0:00:11.269 *********
2025-07-12 15:15:51.720049 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:15:51.720060 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:15:51.720070 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:15:51.720081 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:15:51.720092 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:15:51.720102 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:15:51.720113 | orchestrator |
2025-07-12 15:15:51.720123 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-12 15:15:51.720134 | orchestrator | Saturday 12 July 2025 15:15:51 +0000 (0:00:00.132) 0:00:11.402 *********
2025-07-12 15:15:51.720145 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:15:51.720156 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:15:51.720167 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:15:51.720177 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:15:51.720188 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:15:51.720199 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:15:51.720209 | orchestrator |
2025-07-12 15:15:51.720220 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-12 15:15:51.720231 | orchestrator | Saturday 12 July 2025 15:15:51 +0000 (0:00:00.145) 0:00:11.548 *********
2025-07-12 15:15:51.720241 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:15:51.720252 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:15:51.720309 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:15:51.720320 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:15:51.720342 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:15:52.749517 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:15:52.749623 | orchestrator |
2025-07-12 15:15:52.749639 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-12 15:15:52.749653 | orchestrator | Saturday 12 July 2025 15:15:51 +0000 (0:00:00.145) 0:00:11.693 *********
2025-07-12 15:15:52.749665 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:15:52.749675 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:15:52.749686 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:15:52.749722 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:15:52.749733 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:15:52.749744 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:15:52.749754 | orchestrator |
2025-07-12 15:15:52.749765 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-12 15:15:52.749776 | orchestrator | Saturday 12 July 2025 15:15:52 +0000 (0:00:00.621) 0:00:12.315 *********
2025-07-12 15:15:52.749787 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:15:52.749798 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:15:52.749808 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:15:52.749819 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:15:52.749829 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:15:52.749840 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:15:52.749850 | orchestrator |
2025-07-12 15:15:52.749861 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:15:52.749873 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 15:15:52.749886 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 15:15:52.749897 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 15:15:52.749908 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 15:15:52.749919 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 15:15:52.749929 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 15:15:52.749940 | orchestrator |
2025-07-12 15:15:52.749951 | orchestrator |
2025-07-12 15:15:52.749962 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:15:52.749972 | orchestrator | Saturday 12 July 2025 15:15:52 +0000 (0:00:00.204) 0:00:12.520 *********
2025-07-12 15:15:52.749983 | orchestrator | ===============================================================================
2025-07-12 15:15:52.749993 | orchestrator | Gathering Facts --------------------------------------------------------- 3.30s
2025-07-12 15:15:52.750004 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s
2025-07-12 15:15:52.750077 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2025-07-12 15:15:52.750092 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2025-07-12 15:15:52.750104 | orchestrator | Do not require tty for all users ---------------------------------------- 0.87s
2025-07-12 15:15:52.750116 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s
2025-07-12 15:15:52.750129 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2025-07-12 15:15:52.750141 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.64s
2025-07-12 15:15:52.750153 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.62s
2025-07-12 15:15:52.750164 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.56s
2025-07-12 15:15:52.750177 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2025-07-12 15:15:52.750188 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2025-07-12 15:15:52.750200 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-07-12 15:15:52.750212 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2025-07-12 15:15:52.750233 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2025-07-12 15:15:52.750245 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2025-07-12 15:15:52.750285 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2025-07-12 15:15:52.750303 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s
2025-07-12 15:15:52.986404 | orchestrator | + osism apply --environment custom facts
2025-07-12 15:15:54.727780 | orchestrator | 2025-07-12 15:15:54 | INFO  | Trying to run play facts in environment custom
2025-07-12 15:16:04.841246 | orchestrator | 2025-07-12 15:16:04 | INFO  | Task ce5adad3-5ea6-4763-953b-158874227572 (facts) was prepared for execution.
2025-07-12 15:16:04.841410 | orchestrator | 2025-07-12 15:16:04 | INFO  | It takes a moment until task ce5adad3-5ea6-4763-953b-158874227572 (facts) has been started and output is visible here.
2025-07-12 15:16:45.869180 | orchestrator |
2025-07-12 15:16:45.869367 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-07-12 15:16:45.869385 | orchestrator |
2025-07-12 15:16:45.869397 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-12 15:16:45.869409 | orchestrator | Saturday 12 July 2025 15:16:08 +0000 (0:00:00.082) 0:00:00.082 *********
2025-07-12 15:16:45.869420 | orchestrator | ok: [testbed-manager]
2025-07-12 15:16:45.869432 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:16:45.869443 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:16:45.869474 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:16:45.869485 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:16:45.869496 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:16:45.869507 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:16:45.869518 | orchestrator |
2025-07-12 15:16:45.869529 | orchestrator | TASK [Copy fact file] **********************************************************
2025-07-12 15:16:45.869540 | orchestrator | Saturday 12 July 2025 15:16:10 +0000 (0:00:01.495) 0:00:01.578 *********
2025-07-12 15:16:45.869551 | orchestrator | ok: [testbed-manager]
2025-07-12 15:16:45.869562 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:16:45.869583 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:16:45.869596 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:16:45.869607 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:16:45.869618 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:16:45.869629 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:16:45.869639 | orchestrator |
2025-07-12 15:16:45.869650 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-07-12 15:16:45.869661 | orchestrator |
2025-07-12 15:16:45.869672 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-12 15:16:45.869682 | orchestrator | Saturday 12 July 2025 15:16:11 +0000 (0:00:01.173) 0:00:02.752 *********
2025-07-12 15:16:45.869693 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:16:45.869704 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:16:45.869715 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:16:45.869725 | orchestrator |
2025-07-12 15:16:45.869736 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-12 15:16:45.869748 | orchestrator | Saturday 12 July 2025 15:16:11 +0000 (0:00:00.099) 0:00:02.852 *********
2025-07-12 15:16:45.869759 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:16:45.869769 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:16:45.869786 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:16:45.869796 | orchestrator |
2025-07-12 15:16:45.869807 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-12 15:16:45.869818 | orchestrator | Saturday 12 July 2025 15:16:11 +0000 (0:00:00.197) 0:00:03.049 *********
2025-07-12 15:16:45.869829 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:16:45.869839 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:16:45.869850 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:16:45.869861 | orchestrator |
2025-07-12 15:16:45.869871 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-12 15:16:45.869904 | orchestrator | Saturday 12 July 2025 15:16:11 +0000 (0:00:00.198) 0:00:03.247 *********
2025-07-12 15:16:45.869917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:16:45.869929 | orchestrator |
2025-07-12 15:16:45.869940 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-12 15:16:45.869950 | orchestrator | Saturday 12 July 2025 15:16:11 +0000 (0:00:00.125) 0:00:03.373 *********
2025-07-12 15:16:45.869961 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:16:45.869971 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:16:45.869982 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:16:45.869992 | orchestrator |
2025-07-12 15:16:45.870003 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-12 15:16:45.870014 | orchestrator | Saturday 12 July 2025 15:16:12 +0000 (0:00:00.413) 0:00:03.786 *********
2025-07-12 15:16:45.870085 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:16:45.870097 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:16:45.870107 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:16:45.870118 | orchestrator |
2025-07-12 15:16:45.870128 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-12 15:16:45.870139 | orchestrator | Saturday 12 July 2025 15:16:12 +0000 (0:00:00.101) 0:00:03.887 *********
2025-07-12 15:16:45.870150 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:16:45.870160 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:16:45.870180 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:16:45.870191 | orchestrator |
2025-07-12 15:16:45.870202 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-12 15:16:45.870212 | orchestrator | Saturday 12 July 2025 15:16:13 +0000 (0:00:01.025) 0:00:04.913 *********
2025-07-12 15:16:45.870223 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:16:45.870234 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:16:45.870245 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:16:45.870255 | orchestrator |
2025-07-12 15:16:45.870289 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-12 15:16:45.870311 | orchestrator | Saturday 12 July 2025 15:16:13 +0000 (0:00:00.480) 0:00:05.394 *********
2025-07-12 15:16:45.870330 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:16:45.870348 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:16:45.870366 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:16:45.870386 | orchestrator |
2025-07-12 15:16:45.870406 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-12 15:16:45.870424 | orchestrator | Saturday 12 July 2025 15:16:14 +0000 (0:00:01.054) 0:00:06.448 *********
2025-07-12 15:16:45.870435 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:16:45.870445 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:16:45.870456 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:16:45.870467 | orchestrator |
2025-07-12 15:16:45.870477 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-07-12 15:16:45.870488 | orchestrator | Saturday 12 July 2025 15:16:28 +0000 (0:00:14.011) 0:00:20.459 *********
2025-07-12 15:16:45.870498 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:16:45.870509 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:16:45.870520 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:16:45.870530 | orchestrator |
2025-07-12 15:16:45.870541 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-07-12 15:16:45.870573 | orchestrator | Saturday 12 July 2025 15:16:29 +0000 (0:00:00.110) 0:00:20.570 *********
2025-07-12 15:16:45.870584 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:16:45.870595 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:16:45.870605 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:16:45.870616 | orchestrator |
2025-07-12 15:16:45.870626 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-12 15:16:45.870648 | orchestrator | Saturday 12 July 2025 15:16:36 +0000 (0:00:07.375) 0:00:27.946 *********
2025-07-12 15:16:45.870659 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:16:45.870669 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:16:45.870680 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:16:45.870690 | orchestrator |
2025-07-12 15:16:45.870701 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-12 15:16:45.870712 | orchestrator | Saturday 12 July 2025 15:16:36 +0000 (0:00:00.504) 0:00:28.450 *********
2025-07-12 15:16:45.870722 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-07-12 15:16:45.870733 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-07-12 15:16:45.870744 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-07-12 15:16:45.870754 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-07-12 15:16:45.870765 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-07-12 15:16:45.870775 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-07-12 15:16:45.870786 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-07-12 15:16:45.870796 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-07-12 15:16:45.870806 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-07-12 15:16:45.870817 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-07-12 15:16:45.870828 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-07-12 15:16:45.870838 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-07-12 15:16:45.870848 | orchestrator |
2025-07-12 15:16:45.870859 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-12 15:16:45.870869 | orchestrator | Saturday 12 July 2025 15:16:40 +0000 (0:00:03.706) 0:00:32.157 *********
2025-07-12 15:16:45.870879 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:16:45.870890 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:16:45.870900 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:16:45.870911 | orchestrator |
2025-07-12 15:16:45.870921 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 15:16:45.870932 | orchestrator |
2025-07-12 15:16:45.870942 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 15:16:45.870953 | orchestrator | Saturday 12 July 2025 15:16:41 +0000 (0:00:01.248) 0:00:33.406 *********
2025-07-12 15:16:45.870964 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:16:45.870974 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:16:45.870985 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:16:45.870995 | orchestrator | ok: [testbed-manager]
2025-07-12 15:16:45.871005 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:16:45.871015 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:16:45.871026 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:16:45.871036 | orchestrator |
2025-07-12 15:16:45.871047 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:16:45.871058 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:16:45.871069 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:16:45.871082 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:16:45.871093 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:16:45.871103 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:16:45.871114 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:16:45.871133 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:16:45.871143 | orchestrator |
2025-07-12 15:16:45.871154 | orchestrator |
2025-07-12 15:16:45.871164 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:16:45.871175 | orchestrator | Saturday 12 July 2025 15:16:45 +0000 (0:00:03.979) 0:00:37.386 *********
2025-07-12 15:16:45.871185 | orchestrator | ===============================================================================
2025-07-12 15:16:45.871196 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.01s
2025-07-12 15:16:45.871206 | orchestrator | Install required packages (Debian) -------------------------------------- 7.38s
2025-07-12 15:16:45.871217 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.98s
2025-07-12 15:16:45.871227 | orchestrator | Copy fact files --------------------------------------------------------- 3.71s
2025-07-12 15:16:45.871238 | orchestrator | Create custom facts directory ------------------------------------------- 1.50s
2025-07-12 15:16:45.871248 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.25s
2025-07-12 15:16:45.871281 | orchestrator | Copy fact file ---------------------------------------------------------- 1.17s
2025-07-12 15:16:46.056125 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2025-07-12 15:16:46.056215 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2025-07-12 15:16:46.056227 | orchestrator | Create custom facts directory ------------------------------------------- 0.50s
2025-07-12 15:16:46.056239 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2025-07-12 15:16:46.056250 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2025-07-12 15:16:46.056261 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2025-07-12 15:16:46.056322 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2025-07-12 15:16:46.056334 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2025-07-12 15:16:46.056346 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-07-12 15:16:46.056357 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2025-07-12 15:16:46.056368 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-07-12 15:16:46.308681 | orchestrator | + osism apply bootstrap
2025-07-12 15:16:58.154224 | orchestrator | 2025-07-12 15:16:58 | INFO  | Task ad5c4667-4155-43bc-9eb6-7633ab6bb5ef (bootstrap) was prepared for execution.
2025-07-12 15:16:58.154401 | orchestrator | 2025-07-12 15:16:58 | INFO  | It takes a moment until task ad5c4667-4155-43bc-9eb6-7633ab6bb5ef (bootstrap) has been started and output is visible here.
2025-07-12 15:17:12.844661 | orchestrator | 2025-07-12 15:17:12.844788 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-07-12 15:17:12.844813 | orchestrator | 2025-07-12 15:17:12.844855 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-07-12 15:17:12.844882 | orchestrator | Saturday 12 July 2025 15:17:01 +0000 (0:00:00.120) 0:00:00.120 ********* 2025-07-12 15:17:12.844901 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:12.844920 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:12.844939 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:12.844958 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:12.844977 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:12.844996 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:12.845016 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:12.845035 | orchestrator | 2025-07-12 15:17:12.845055 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-12 15:17:12.845105 | orchestrator | 2025-07-12 15:17:12.845127 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-12 15:17:12.845147 | orchestrator | Saturday 12 July 2025 15:17:01 +0000 (0:00:00.160) 0:00:00.280 ********* 2025-07-12 15:17:12.845166 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:12.845189 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:12.845211 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:12.845233 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:12.845254 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:12.845322 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:12.845346 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:12.845369 | orchestrator | 2025-07-12 15:17:12.845390 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-07-12 15:17:12.845414 | orchestrator | 2025-07-12 15:17:12.845433 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-12 15:17:12.845454 | orchestrator | Saturday 12 July 2025 15:17:05 +0000 (0:00:03.626) 0:00:03.906 ********* 2025-07-12 15:17:12.845476 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-12 15:17:12.845498 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-07-12 15:17:12.845519 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-12 15:17:12.845540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 15:17:12.845560 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-12 15:17:12.845579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 15:17:12.845599 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-07-12 15:17:12.845619 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-12 15:17:12.845638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 15:17:12.845656 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-07-12 15:17:12.845674 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-12 15:17:12.845692 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-12 15:17:12.845710 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-12 15:17:12.845729 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-12 15:17:12.845746 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-07-12 15:17:12.845764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-07-12 15:17:12.845781 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-07-12 15:17:12.845798 | orchestrator | skipping: 
[testbed-manager] => (item=testbed-node-5)  2025-07-12 15:17:12.845817 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:17:12.845836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-12 15:17:12.845853 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-07-12 15:17:12.845870 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-07-12 15:17:12.845932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-12 15:17:12.845970 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-07-12 15:17:12.846008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-07-12 15:17:12.846084 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:17:12.846105 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-12 15:17:12.846119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-07-12 15:17:12.846129 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-12 15:17:12.846139 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-07-12 15:17:12.846150 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-07-12 15:17:12.846161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-12 15:17:12.846171 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-07-12 15:17:12.846197 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-12 15:17:12.846208 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-12 15:17:12.846218 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-07-12 15:17:12.846229 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-07-12 15:17:12.846239 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-07-12 15:17:12.846250 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2025-07-12 15:17:12.846261 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-07-12 15:17:12.846308 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:17:12.846322 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-12 15:17:12.846333 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-07-12 15:17:12.846343 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-07-12 15:17:12.846354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:17:12.846364 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:17:12.846401 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-12 15:17:12.846413 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-07-12 15:17:12.846423 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:17:12.846434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:17:12.846453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-07-12 15:17:12.846464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:17:12.846475 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:17:12.846486 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-07-12 15:17:12.846496 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-07-12 15:17:12.846507 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:17:12.846518 | orchestrator | 2025-07-12 15:17:12.846529 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-07-12 15:17:12.846540 | orchestrator | 2025-07-12 15:17:12.846551 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-07-12 15:17:12.846562 | orchestrator | Saturday 12 July 2025 15:17:05 +0000 (0:00:00.357) 
0:00:04.264 ********* 2025-07-12 15:17:12.846573 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:12.846584 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:12.846594 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:12.846605 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:12.846616 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:12.846626 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:12.846637 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:12.846648 | orchestrator | 2025-07-12 15:17:12.846658 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-07-12 15:17:12.846685 | orchestrator | Saturday 12 July 2025 15:17:07 +0000 (0:00:01.246) 0:00:05.511 ********* 2025-07-12 15:17:12.846696 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:12.846718 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:12.846729 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:12.846740 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:12.846750 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:12.846760 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:12.846771 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:12.846781 | orchestrator | 2025-07-12 15:17:12.846792 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-07-12 15:17:12.846803 | orchestrator | Saturday 12 July 2025 15:17:08 +0000 (0:00:01.172) 0:00:06.684 ********* 2025-07-12 15:17:12.846815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:17:12.846828 | orchestrator | 2025-07-12 15:17:12.846839 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-07-12 15:17:12.846861 | orchestrator | Saturday 
12 July 2025 15:17:08 +0000 (0:00:00.227) 0:00:06.912 ********* 2025-07-12 15:17:12.846872 | orchestrator | changed: [testbed-manager] 2025-07-12 15:17:12.846882 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:17:12.846905 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:17:12.846925 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:17:12.846935 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:17:12.846946 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:17:12.846956 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:17:12.846967 | orchestrator | 2025-07-12 15:17:12.846978 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-07-12 15:17:12.846988 | orchestrator | Saturday 12 July 2025 15:17:10 +0000 (0:00:01.952) 0:00:08.864 ********* 2025-07-12 15:17:12.846999 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:17:12.847011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:17:12.847024 | orchestrator | 2025-07-12 15:17:12.847035 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-07-12 15:17:12.847046 | orchestrator | Saturday 12 July 2025 15:17:10 +0000 (0:00:00.248) 0:00:09.112 ********* 2025-07-12 15:17:12.847057 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:17:12.847067 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:17:12.847078 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:17:12.847088 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:17:12.847099 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:17:12.847109 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:17:12.847120 | orchestrator | 2025-07-12 15:17:12.847131 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2025-07-12 15:17:12.847141 | orchestrator | Saturday 12 July 2025 15:17:11 +0000 (0:00:01.039) 0:00:10.152 ********* 2025-07-12 15:17:12.847152 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:17:12.847163 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:17:12.847173 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:17:12.847184 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:17:12.847195 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:17:12.847205 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:17:12.847216 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:17:12.847226 | orchestrator | 2025-07-12 15:17:12.847237 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-07-12 15:17:12.847248 | orchestrator | Saturday 12 July 2025 15:17:12 +0000 (0:00:00.587) 0:00:10.740 ********* 2025-07-12 15:17:12.847258 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:17:12.847269 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:17:12.847304 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:17:12.847315 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:17:12.847326 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:17:12.847336 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:17:12.847347 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:12.847357 | orchestrator | 2025-07-12 15:17:12.847368 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-12 15:17:12.847380 | orchestrator | Saturday 12 July 2025 15:17:12 +0000 (0:00:00.418) 0:00:11.158 ********* 2025-07-12 15:17:12.847391 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:17:12.847401 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:17:12.847419 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:17:24.623233 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 15:17:24.623403 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:17:24.623420 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:17:24.623433 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:17:24.623444 | orchestrator | 2025-07-12 15:17:24.623456 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-12 15:17:24.623491 | orchestrator | Saturday 12 July 2025 15:17:12 +0000 (0:00:00.205) 0:00:11.364 ********* 2025-07-12 15:17:24.623505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:17:24.623535 | orchestrator | 2025-07-12 15:17:24.623546 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-12 15:17:24.623558 | orchestrator | Saturday 12 July 2025 15:17:13 +0000 (0:00:00.265) 0:00:11.629 ********* 2025-07-12 15:17:24.623569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:17:24.623580 | orchestrator | 2025-07-12 15:17:24.623591 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-07-12 15:17:24.623601 | orchestrator | Saturday 12 July 2025 15:17:13 +0000 (0:00:00.301) 0:00:11.930 ********* 2025-07-12 15:17:24.623612 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.623624 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:24.623634 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:24.623645 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:24.623655 | orchestrator | ok: [testbed-node-1] 2025-07-12 
15:17:24.623666 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:24.623677 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:24.623687 | orchestrator | 2025-07-12 15:17:24.623699 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-12 15:17:24.623710 | orchestrator | Saturday 12 July 2025 15:17:14 +0000 (0:00:01.408) 0:00:13.339 ********* 2025-07-12 15:17:24.623721 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:17:24.623732 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:17:24.623743 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:17:24.623756 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:17:24.623768 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:17:24.623781 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:17:24.623793 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:17:24.623804 | orchestrator | 2025-07-12 15:17:24.623817 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-12 15:17:24.623829 | orchestrator | Saturday 12 July 2025 15:17:15 +0000 (0:00:00.197) 0:00:13.537 ********* 2025-07-12 15:17:24.623840 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.623853 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:24.623864 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:24.623876 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:24.623887 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:24.623897 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:24.623908 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:24.623918 | orchestrator | 2025-07-12 15:17:24.623929 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-12 15:17:24.623940 | orchestrator | Saturday 12 July 2025 15:17:15 +0000 (0:00:00.544) 0:00:14.081 ********* 2025-07-12 15:17:24.623992 | orchestrator | skipping: 
[testbed-manager] 2025-07-12 15:17:24.624004 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:17:24.624015 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:17:24.624026 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:17:24.624036 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:17:24.624046 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:17:24.624057 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:17:24.624067 | orchestrator | 2025-07-12 15:17:24.624078 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-12 15:17:24.624090 | orchestrator | Saturday 12 July 2025 15:17:15 +0000 (0:00:00.235) 0:00:14.317 ********* 2025-07-12 15:17:24.624101 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.624121 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:17:24.624132 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:17:24.624142 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:17:24.624153 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:17:24.624163 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:17:24.624174 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:17:24.624184 | orchestrator | 2025-07-12 15:17:24.624195 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-12 15:17:24.624205 | orchestrator | Saturday 12 July 2025 15:17:16 +0000 (0:00:00.543) 0:00:14.860 ********* 2025-07-12 15:17:24.624216 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.624226 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:17:24.624237 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:17:24.624247 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:17:24.624257 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:17:24.624268 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:17:24.624311 | orchestrator | changed: 
[testbed-node-5] 2025-07-12 15:17:24.624331 | orchestrator | 2025-07-12 15:17:24.624350 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-12 15:17:24.624369 | orchestrator | Saturday 12 July 2025 15:17:17 +0000 (0:00:01.089) 0:00:15.950 ********* 2025-07-12 15:17:24.624381 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.624391 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:24.624402 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:24.624412 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:24.624423 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:24.624433 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:24.624443 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:24.624454 | orchestrator | 2025-07-12 15:17:24.624464 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-12 15:17:24.624475 | orchestrator | Saturday 12 July 2025 15:17:18 +0000 (0:00:01.097) 0:00:17.047 ********* 2025-07-12 15:17:24.624511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:17:24.624523 | orchestrator | 2025-07-12 15:17:24.624534 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-12 15:17:24.624545 | orchestrator | Saturday 12 July 2025 15:17:18 +0000 (0:00:00.295) 0:00:17.342 ********* 2025-07-12 15:17:24.624555 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:17:24.624565 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:17:24.624576 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:17:24.624586 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:17:24.624596 | orchestrator | changed: [testbed-node-2] 2025-07-12 
15:17:24.624607 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:17:24.624617 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:17:24.624627 | orchestrator | 2025-07-12 15:17:24.624638 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-12 15:17:24.624648 | orchestrator | Saturday 12 July 2025 15:17:20 +0000 (0:00:01.421) 0:00:18.764 ********* 2025-07-12 15:17:24.624659 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.624669 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:24.624680 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:24.624690 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:24.624701 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:24.624711 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:24.624721 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:24.624731 | orchestrator | 2025-07-12 15:17:24.624742 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-12 15:17:24.624753 | orchestrator | Saturday 12 July 2025 15:17:20 +0000 (0:00:00.210) 0:00:18.974 ********* 2025-07-12 15:17:24.624763 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.624774 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:24.624793 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:24.624803 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:24.624814 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:24.624824 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:24.624834 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:24.624845 | orchestrator | 2025-07-12 15:17:24.624855 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-12 15:17:24.624866 | orchestrator | Saturday 12 July 2025 15:17:20 +0000 (0:00:00.201) 0:00:19.176 ********* 2025-07-12 15:17:24.624876 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.624887 | 
orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:24.624897 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:24.624907 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:24.624917 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:24.624928 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:24.624938 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:24.624949 | orchestrator | 2025-07-12 15:17:24.624959 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-12 15:17:24.624970 | orchestrator | Saturday 12 July 2025 15:17:20 +0000 (0:00:00.197) 0:00:19.373 ********* 2025-07-12 15:17:24.624982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:17:24.624994 | orchestrator | 2025-07-12 15:17:24.625004 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-12 15:17:24.625015 | orchestrator | Saturday 12 July 2025 15:17:21 +0000 (0:00:00.252) 0:00:19.626 ********* 2025-07-12 15:17:24.625026 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.625036 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:24.625046 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:24.625056 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:24.625067 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:24.625077 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:24.625087 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:24.625098 | orchestrator | 2025-07-12 15:17:24.625108 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-12 15:17:24.625119 | orchestrator | Saturday 12 July 2025 15:17:21 +0000 (0:00:00.545) 0:00:20.171 ********* 2025-07-12 15:17:24.625129 | orchestrator | 
skipping: [testbed-manager] 2025-07-12 15:17:24.625139 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:17:24.625150 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:17:24.625161 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:17:24.625171 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:17:24.625181 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:17:24.625192 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:17:24.625202 | orchestrator | 2025-07-12 15:17:24.625492 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-12 15:17:24.625505 | orchestrator | Saturday 12 July 2025 15:17:21 +0000 (0:00:00.207) 0:00:20.378 ********* 2025-07-12 15:17:24.625516 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.625526 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:17:24.625537 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:17:24.625547 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:17:24.625558 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:24.625569 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:24.625579 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:24.625590 | orchestrator | 2025-07-12 15:17:24.625601 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-12 15:17:24.625612 | orchestrator | Saturday 12 July 2025 15:17:22 +0000 (0:00:00.981) 0:00:21.360 ********* 2025-07-12 15:17:24.625623 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.625633 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:17:24.625644 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:17:24.625654 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:17:24.625665 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:17:24.625684 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:24.625695 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:17:24.625705 | orchestrator | 
2025-07-12 15:17:24.625716 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-12 15:17:24.625727 | orchestrator | Saturday 12 July 2025 15:17:23 +0000 (0:00:00.564) 0:00:21.924 ********* 2025-07-12 15:17:24.625737 | orchestrator | ok: [testbed-manager] 2025-07-12 15:17:24.625748 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:17:24.625759 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:17:24.625769 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:17:24.625789 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:18:01.817666 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:18:01.817782 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:18:01.817797 | orchestrator | 2025-07-12 15:18:01.817826 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-12 15:18:01.817839 | orchestrator | Saturday 12 July 2025 15:17:24 +0000 (0:00:01.130) 0:00:23.055 ********* 2025-07-12 15:18:01.817850 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:18:01.817861 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:18:01.817872 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:18:01.817882 | orchestrator | changed: [testbed-manager] 2025-07-12 15:18:01.817894 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:18:01.817905 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:18:01.817915 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:18:01.817926 | orchestrator | 2025-07-12 15:18:01.817937 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-07-12 15:18:01.817949 | orchestrator | Saturday 12 July 2025 15:17:39 +0000 (0:00:14.515) 0:00:37.570 ********* 2025-07-12 15:18:01.817959 | orchestrator | ok: [testbed-manager] 2025-07-12 15:18:01.817970 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:18:01.817981 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:18:01.817991 | orchestrator 
| ok: [testbed-node-2]
2025-07-12 15:18:01.818002 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.818012 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.818096 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.818108 | orchestrator |
2025-07-12 15:18:01.818119 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-07-12 15:18:01.818130 | orchestrator | Saturday 12 July 2025 15:17:39 +0000 (0:00:00.208) 0:00:37.778 *********
2025-07-12 15:18:01.818140 | orchestrator | ok: [testbed-manager]
2025-07-12 15:18:01.818151 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:18:01.818162 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:18:01.818173 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:18:01.818183 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.818194 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.818207 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.818219 | orchestrator |
2025-07-12 15:18:01.818232 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-07-12 15:18:01.818245 | orchestrator | Saturday 12 July 2025 15:17:39 +0000 (0:00:00.216) 0:00:37.995 *********
2025-07-12 15:18:01.818257 | orchestrator | ok: [testbed-manager]
2025-07-12 15:18:01.818269 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:18:01.818307 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:18:01.818327 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:18:01.818348 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.818366 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.818383 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.818396 | orchestrator |
2025-07-12 15:18:01.818408 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-07-12 15:18:01.818420 | orchestrator | Saturday 12 July 2025 15:17:39 +0000 (0:00:00.221) 0:00:38.216 *********
2025-07-12 15:18:01.818435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:18:01.818474 | orchestrator |
2025-07-12 15:18:01.818488 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-07-12 15:18:01.818501 | orchestrator | Saturday 12 July 2025 15:17:40 +0000 (0:00:00.245) 0:00:38.462 *********
2025-07-12 15:18:01.818513 | orchestrator | ok: [testbed-manager]
2025-07-12 15:18:01.818525 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:18:01.818537 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:18:01.818549 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.818561 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.818571 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:18:01.818581 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.818592 | orchestrator |
2025-07-12 15:18:01.818603 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-07-12 15:18:01.818614 | orchestrator | Saturday 12 July 2025 15:17:41 +0000 (0:00:01.639) 0:00:40.102 *********
2025-07-12 15:18:01.818624 | orchestrator | changed: [testbed-manager]
2025-07-12 15:18:01.818635 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:18:01.818645 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:18:01.818656 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:18:01.818666 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:18:01.818677 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:18:01.818687 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:18:01.818698 | orchestrator |
2025-07-12 15:18:01.818708 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-07-12 15:18:01.818719 | orchestrator | Saturday 12 July 2025 15:17:42 +0000 (0:00:01.148) 0:00:41.251 *********
2025-07-12 15:18:01.818730 | orchestrator | ok: [testbed-manager]
2025-07-12 15:18:01.818740 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:18:01.818772 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:18:01.818784 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.818794 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:18:01.818805 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.818815 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.818825 | orchestrator |
2025-07-12 15:18:01.818836 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-07-12 15:18:01.818847 | orchestrator | Saturday 12 July 2025 15:17:43 +0000 (0:00:00.286) 0:00:42.070 *********
2025-07-12 15:18:01.818859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:18:01.818872 | orchestrator |
2025-07-12 15:18:01.818883 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-07-12 15:18:01.818894 | orchestrator | Saturday 12 July 2025 15:17:43 +0000 (0:00:00.286) 0:00:42.357 *********
2025-07-12 15:18:01.818905 | orchestrator | changed: [testbed-manager]
2025-07-12 15:18:01.818915 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:18:01.818926 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:18:01.818936 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:18:01.818947 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:18:01.818958 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:18:01.818968 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:18:01.818979 | orchestrator |
2025-07-12 15:18:01.819010 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-07-12 15:18:01.819022 | orchestrator | Saturday 12 July 2025 15:17:44 +0000 (0:00:00.974) 0:00:43.332 *********
2025-07-12 15:18:01.819033 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:18:01.819044 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:18:01.819054 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:18:01.819065 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:18:01.819076 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:18:01.819087 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:18:01.819098 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:18:01.819109 | orchestrator |
2025-07-12 15:18:01.819120 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-07-12 15:18:01.819139 | orchestrator | Saturday 12 July 2025 15:17:45 +0000 (0:00:00.284) 0:00:43.616 *********
2025-07-12 15:18:01.819150 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:18:01.819161 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:18:01.819171 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:18:01.819182 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:18:01.819192 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:18:01.819203 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:18:01.819213 | orchestrator | changed: [testbed-manager]
2025-07-12 15:18:01.819224 | orchestrator |
2025-07-12 15:18:01.819235 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-07-12 15:18:01.819246 | orchestrator | Saturday 12 July 2025 15:17:56 +0000 (0:00:10.925) 0:00:54.541 *********
2025-07-12 15:18:01.819257 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:18:01.819267 | orchestrator | ok: [testbed-manager]
2025-07-12 15:18:01.819300 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:18:01.819313 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.819324 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.819334 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.819344 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:18:01.819355 | orchestrator |
2025-07-12 15:18:01.819365 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-07-12 15:18:01.819376 | orchestrator | Saturday 12 July 2025 15:17:57 +0000 (0:00:01.563) 0:00:56.105 *********
2025-07-12 15:18:01.819386 | orchestrator | ok: [testbed-manager]
2025-07-12 15:18:01.819397 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:18:01.819407 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:18:01.819417 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:18:01.819428 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.819438 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.819448 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.819459 | orchestrator |
2025-07-12 15:18:01.819469 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-07-12 15:18:01.819480 | orchestrator | Saturday 12 July 2025 15:17:58 +0000 (0:00:00.916) 0:00:57.021 *********
2025-07-12 15:18:01.819490 | orchestrator | ok: [testbed-manager]
2025-07-12 15:18:01.819510 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:18:01.819521 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:18:01.819531 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:18:01.819542 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.819552 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.819562 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.819573 | orchestrator |
2025-07-12 15:18:01.819583 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-07-12 15:18:01.819594 | orchestrator | Saturday 12 July 2025 15:17:58 +0000 (0:00:00.221) 0:00:57.243 *********
2025-07-12 15:18:01.819605 | orchestrator | ok: [testbed-manager]
2025-07-12 15:18:01.819615 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:18:01.819626 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:18:01.819636 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:18:01.819646 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.819656 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.819667 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.819677 | orchestrator |
2025-07-12 15:18:01.819688 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-07-12 15:18:01.819699 | orchestrator | Saturday 12 July 2025 15:17:58 +0000 (0:00:00.267) 0:00:57.435 *********
2025-07-12 15:18:01.819709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:18:01.819721 | orchestrator |
2025-07-12 15:18:01.819731 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-07-12 15:18:01.819742 | orchestrator | Saturday 12 July 2025 15:17:59 +0000 (0:00:00.267) 0:00:57.703 *********
2025-07-12 15:18:01.819761 | orchestrator | ok: [testbed-manager]
2025-07-12 15:18:01.819772 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:18:01.819782 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.819792 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:18:01.819802 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.819813 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.819823 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:18:01.819833 | orchestrator |
2025-07-12 15:18:01.819844 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-07-12 15:18:01.819855 | orchestrator | Saturday 12 July 2025 15:18:01 +0000 (0:00:01.765) 0:00:59.468 *********
2025-07-12 15:18:01.819865 | orchestrator | changed: [testbed-manager]
2025-07-12 15:18:01.819876 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:18:01.819886 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:18:01.819896 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:18:01.819907 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:18:01.819917 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:18:01.819927 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:18:01.819938 | orchestrator |
2025-07-12 15:18:01.819948 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-07-12 15:18:01.819959 | orchestrator | Saturday 12 July 2025 15:18:01 +0000 (0:00:00.559) 0:01:00.028 *********
2025-07-12 15:18:01.819969 | orchestrator | ok: [testbed-manager]
2025-07-12 15:18:01.819980 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:18:01.819990 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:18:01.820001 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:18:01.820012 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:18:01.820022 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:18:01.820032 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:18:01.820043 | orchestrator |
2025-07-12 15:18:01.820054 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-07-12 15:18:01.820077 | orchestrator | Saturday 12 July 2025 15:18:01 +0000 (0:00:00.226) 0:01:00.255 *********
2025-07-12 15:20:21.057009 | orchestrator | ok: [testbed-manager]
2025-07-12 15:20:21.057150 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:20:21.057167 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:20:21.057178 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:20:21.057189 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:20:21.057200 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:20:21.057211 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:20:21.057222 | orchestrator |
2025-07-12 15:20:21.057234 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-07-12 15:20:21.057247 | orchestrator | Saturday 12 July 2025 15:18:03 +0000 (0:00:01.485) 0:01:01.741 *********
2025-07-12 15:20:21.057258 | orchestrator | changed: [testbed-manager]
2025-07-12 15:20:21.057269 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:20:21.057280 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:20:21.057290 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:20:21.057301 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:20:21.057352 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:20:21.057365 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:20:21.057376 | orchestrator |
2025-07-12 15:20:21.057387 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-07-12 15:20:21.057398 | orchestrator | Saturday 12 July 2025 15:18:05 +0000 (0:00:01.886) 0:01:03.627 *********
2025-07-12 15:20:21.057408 | orchestrator | ok: [testbed-manager]
2025-07-12 15:20:21.057419 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:20:21.057430 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:20:21.057441 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:20:21.057451 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:20:21.057462 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:20:21.057473 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:20:21.057484 | orchestrator |
2025-07-12 15:20:21.057494 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-07-12 15:20:21.057524 | orchestrator | Saturday 12 July 2025 15:18:07 +0000 (0:00:02.478) 0:01:06.106 *********
2025-07-12 15:20:21.057536 | orchestrator | ok: [testbed-manager]
2025-07-12 15:20:21.057546 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:20:21.057557 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:20:21.057569 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:20:21.057582 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:20:21.057594 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:20:21.057606 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:20:21.057618 | orchestrator |
2025-07-12 15:20:21.057630 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-07-12 15:20:21.057642 | orchestrator | Saturday 12 July 2025 15:18:45 +0000 (0:00:37.694) 0:01:43.800 *********
2025-07-12 15:20:21.057661 | orchestrator | changed: [testbed-manager]
2025-07-12 15:20:21.057683 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:20:21.057705 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:20:21.057718 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:20:21.057730 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:20:21.057742 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:20:21.057755 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:20:21.057767 | orchestrator |
2025-07-12 15:20:21.057780 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-07-12 15:20:21.057792 | orchestrator | Saturday 12 July 2025 15:20:01 +0000 (0:01:16.055) 0:02:59.856 *********
2025-07-12 15:20:21.057804 | orchestrator | ok: [testbed-manager]
2025-07-12 15:20:21.057817 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:20:21.057829 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:20:21.057841 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:20:21.057853 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:20:21.057864 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:20:21.057876 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:20:21.057888 | orchestrator |
2025-07-12 15:20:21.057901 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-07-12 15:20:21.057914 | orchestrator | Saturday 12 July 2025 15:20:03 +0000 (0:00:01.833) 0:03:01.689 *********
2025-07-12 15:20:21.057925 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:20:21.057936 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:20:21.057946 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:20:21.057957 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:20:21.057968 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:20:21.057978 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:20:21.057989 | orchestrator | changed: [testbed-manager]
2025-07-12 15:20:21.057999 | orchestrator |
2025-07-12 15:20:21.058010 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-07-12 15:20:21.058072 | orchestrator | Saturday 12 July 2025 15:20:14 +0000 (0:00:11.136) 0:03:12.826 *********
2025-07-12 15:20:21.058093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-07-12 15:20:21.058115 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-07-12 15:20:21.058157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-07-12 15:20:21.058180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-07-12 15:20:21.058191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-07-12 15:20:21.058202 | orchestrator |
2025-07-12 15:20:21.058214 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-07-12 15:20:21.058225 | orchestrator | Saturday 12 July 2025 15:20:14 +0000 (0:00:00.366) 0:03:13.192 *********
2025-07-12 15:20:21.058236 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 15:20:21.058247 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:20:21.058258 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 15:20:21.058269 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:20:21.058280 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 15:20:21.058290 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:20:21.058301 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 15:20:21.058352 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:20:21.058372 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 15:20:21.058383 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 15:20:21.058394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 15:20:21.058404 | orchestrator |
2025-07-12 15:20:21.058415 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-07-12 15:20:21.058425 | orchestrator | Saturday 12 July 2025 15:20:16 +0000 (0:00:01.628) 0:03:14.820 *********
2025-07-12 15:20:21.058436 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 15:20:21.058447 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 15:20:21.058458 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 15:20:21.058468 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 15:20:21.058479 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 15:20:21.058489 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 15:20:21.058500 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 15:20:21.058510 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 15:20:21.058521 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 15:20:21.058531 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 15:20:21.058542 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:20:21.058553 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 15:20:21.058570 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 15:20:21.058581 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 15:20:21.058592 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 15:20:21.058603 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 15:20:21.058613 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 15:20:21.058624 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 15:20:21.058634 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 15:20:21.058645 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 15:20:21.058655 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 15:20:21.058666 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 15:20:21.058684 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 15:20:24.747847 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:20:24.747935 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 15:20:24.747951 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 15:20:24.747963 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 15:20:24.747974 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 15:20:24.747985 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 15:20:24.747996 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 15:20:24.748007 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 15:20:24.748018 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 15:20:24.748029 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:20:24.748040 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 15:20:24.748051 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 15:20:24.748062 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 15:20:24.748073 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 15:20:24.748084 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 15:20:24.748095 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 15:20:24.748106 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 15:20:24.748117 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 15:20:24.748127 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 15:20:24.748138 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 15:20:24.748150 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:20:24.748161 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 15:20:24.748172 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 15:20:24.748202 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 15:20:24.748213 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 15:20:24.748224 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 15:20:24.748235 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 15:20:24.748246 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 15:20:24.748271 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 15:20:24.748282 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 15:20:24.748293 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 15:20:24.748304 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 15:20:24.748441 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 15:20:24.748457 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 15:20:24.748469 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 15:20:24.748482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 15:20:24.748495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 15:20:24.748507 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 15:20:24.748519 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 15:20:24.748531 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 15:20:24.748543 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 15:20:24.748555 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 15:20:24.748567 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 15:20:24.748603 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 15:20:24.748617 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 15:20:24.748629 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 15:20:24.748674 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 15:20:24.748687 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 15:20:24.748700 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 15:20:24.748712 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 15:20:24.748724 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 15:20:24.748737 | orchestrator |
2025-07-12 15:20:24.748750 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-07-12 15:20:24.748762 | orchestrator | Saturday 12 July 2025 15:20:21 +0000 (0:00:04.673) 0:03:19.494 *********
2025-07-12 15:20:24.748772 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 15:20:24.748783 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 15:20:24.748804 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 15:20:24.748815 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 15:20:24.748826 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 15:20:24.748836 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 15:20:24.748847 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 15:20:24.748858 | orchestrator |
2025-07-12 15:20:24.748869 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-07-12 15:20:24.748880 | orchestrator | Saturday 12 July 2025 15:20:22 +0000 (0:00:01.360) 0:03:20.854 *********
2025-07-12 15:20:24.748890 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 15:20:24.748902 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:20:24.748912 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 15:20:24.748933 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 15:20:24.748954 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:20:24.748975 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 15:20:24.748990 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:20:24.749001 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:20:24.749012 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 15:20:24.749022 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 15:20:24.749033 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 15:20:24.749044 | orchestrator |
2025-07-12 15:20:24.749054 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-07-12 15:20:24.749065 | orchestrator | Saturday 12 July 2025 15:20:22 +0000 (0:00:00.532) 0:03:21.387 *********
2025-07-12 15:20:24.749076 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 15:20:24.749086 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:20:24.749097 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 15:20:24.749108 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 15:20:24.749125 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:20:24.749144 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 15:20:24.749161 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:20:24.749176 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:20:24.749194 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 15:20:24.749205 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 15:20:24.749216 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 15:20:24.749226 | orchestrator |
2025-07-12 15:20:24.749237 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-07-12 15:20:24.749248 | orchestrator | Saturday 12 July 2025 15:20:24 +0000 (0:00:01.586) 0:03:22.973 *********
2025-07-12 15:20:24.749258 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:20:24.749269 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:20:24.749280 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:20:24.749290 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:20:24.749301 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:20:24.749345 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:20:24.749365 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:20:24.749381 | orchestrator |
2025-07-12 15:20:24.749415 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-07-12 15:20:36.243932 | orchestrator | Saturday 12 July 2025 15:20:24 +0000 (0:00:00.221) 0:03:23.195 *********
2025-07-12 15:20:36.244041 | orchestrator | ok: [testbed-manager]
2025-07-12 15:20:36.244057 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:20:36.244068 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:20:36.244079 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:20:36.244090 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:20:36.244101 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:20:36.244111 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:20:36.244122 | orchestrator |
2025-07-12 15:20:36.244134 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-07-12 15:20:36.244145 | orchestrator | Saturday 12 July 2025 15:20:30 +0000 (0:00:05.749) 0:03:28.944 *********
2025-07-12 15:20:36.244156 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-07-12 15:20:36.244167 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:20:36.244178 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-07-12 15:20:36.244189 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:20:36.244199 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-07-12 15:20:36.244210 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:20:36.244220 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-07-12 15:20:36.244231 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-07-12 15:20:36.244242 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:20:36.244252 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:20:36.244263 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-07-12 15:20:36.244273 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:20:36.244284 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-07-12 15:20:36.244294 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:20:36.244305 | orchestrator |
2025-07-12 15:20:36.244344 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-07-12 15:20:36.244356 | orchestrator | Saturday 12 July 2025 15:20:30 +0000 (0:00:00.283) 0:03:29.228 *********
2025-07-12 15:20:36.244367 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-07-12 15:20:36.244378 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-07-12 15:20:36.244389 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-07-12 15:20:36.244400 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-07-12 15:20:36.244410 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-07-12 15:20:36.244421 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-07-12 15:20:36.244431 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-07-12 15:20:36.244442 | orchestrator |
2025-07-12 15:20:36.244453 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-07-12 15:20:36.244464 | orchestrator | Saturday 12 July 2025 15:20:31 +0000 (0:00:00.983) 0:03:30.212 *********
2025-07-12 15:20:36.244479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:20:36.244494 | orchestrator |
2025-07-12 15:20:36.244506 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-07-12 15:20:36.244519 | orchestrator | Saturday 12 July 2025 15:20:32 +0000 (0:00:00.372) 0:03:30.584 *********
2025-07-12 15:20:36.244530 | orchestrator | ok: [testbed-manager]
2025-07-12 15:20:36.244543 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:20:36.244554 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:20:36.244566 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:20:36.244579 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:20:36.244591 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:20:36.244603 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:20:36.244642 | orchestrator |
2025-07-12 15:20:36.244654 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-07-12 15:20:36.244665 | orchestrator | Saturday 12 July 2025 15:20:33 +0000 (0:00:01.275) 0:03:31.860
********* 2025-07-12 15:20:36.244676 | orchestrator | ok: [testbed-manager] 2025-07-12 15:20:36.244686 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:20:36.244697 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:20:36.244707 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:20:36.244718 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:20:36.244728 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:20:36.244738 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:20:36.244749 | orchestrator | 2025-07-12 15:20:36.244759 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-07-12 15:20:36.244770 | orchestrator | Saturday 12 July 2025 15:20:34 +0000 (0:00:00.615) 0:03:32.475 ********* 2025-07-12 15:20:36.244780 | orchestrator | changed: [testbed-manager] 2025-07-12 15:20:36.244791 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:20:36.244802 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:20:36.244812 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:20:36.244823 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:20:36.244833 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:20:36.244844 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:20:36.244854 | orchestrator | 2025-07-12 15:20:36.244865 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-07-12 15:20:36.244876 | orchestrator | Saturday 12 July 2025 15:20:34 +0000 (0:00:00.633) 0:03:33.109 ********* 2025-07-12 15:20:36.244887 | orchestrator | ok: [testbed-manager] 2025-07-12 15:20:36.244897 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:20:36.244908 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:20:36.244918 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:20:36.244929 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:20:36.244940 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:20:36.244950 | orchestrator | ok: [testbed-node-4] 2025-07-12 
15:20:36.244960 | orchestrator | 2025-07-12 15:20:36.244971 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-07-12 15:20:36.244982 | orchestrator | Saturday 12 July 2025 15:20:35 +0000 (0:00:00.597) 0:03:33.707 ********* 2025-07-12 15:20:36.245030 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752332085.1541634, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:20:36.245047 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752332145.1606991, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:20:36.245059 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752332159.9177883, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:20:36.245080 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752332145.8967452, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:20:36.245091 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752332153.7842727, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:20:36.245103 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752332142.3794513, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:20:36.245114 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752332149.255103, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:20:36.245133 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752332108.0211632, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:21:00.181034 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752332038.777467, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:21:00.181177 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752332034.86099, 'mtime': 1712646062.0, 'ctime': 
1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:21:00.181218 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752332046.0830207, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:21:00.181232 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752332052.4559062, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:21:00.181245 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752332039.1272578, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2025-07-12 15:21:00.181257 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752332041.3925185, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:21:00.181270 | orchestrator | 2025-07-12 15:21:00.181284 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-07-12 15:21:00.181298 | orchestrator | Saturday 12 July 2025 15:20:36 +0000 (0:00:00.974) 0:03:34.681 ********* 2025-07-12 15:21:00.181313 | orchestrator | changed: [testbed-manager] 2025-07-12 15:21:00.181392 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:21:00.181410 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:21:00.181429 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:21:00.181448 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:21:00.181467 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:21:00.181485 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:21:00.181503 | orchestrator | 2025-07-12 15:21:00.181522 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-07-12 15:21:00.181541 | orchestrator | Saturday 12 July 2025 15:20:37 +0000 (0:00:01.169) 0:03:35.851 ********* 2025-07-12 15:21:00.181561 | orchestrator | changed: [testbed-manager] 2025-07-12 15:21:00.181590 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:21:00.181603 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:21:00.181616 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:21:00.181654 | orchestrator | changed: 
[testbed-node-3] 2025-07-12 15:21:00.181673 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:21:00.181691 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:21:00.181709 | orchestrator | 2025-07-12 15:21:00.181727 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-07-12 15:21:00.181744 | orchestrator | Saturday 12 July 2025 15:20:38 +0000 (0:00:01.139) 0:03:36.990 ********* 2025-07-12 15:21:00.181763 | orchestrator | changed: [testbed-manager] 2025-07-12 15:21:00.181800 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:21:00.181819 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:21:00.181837 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:21:00.181855 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:21:00.181875 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:21:00.181893 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:21:00.181912 | orchestrator | 2025-07-12 15:21:00.181931 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-07-12 15:21:00.181949 | orchestrator | Saturday 12 July 2025 15:20:39 +0000 (0:00:01.106) 0:03:38.096 ********* 2025-07-12 15:21:00.181967 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:21:00.181985 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:21:00.182003 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:21:00.182095 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:21:00.182120 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:21:00.182139 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:21:00.182159 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:21:00.182172 | orchestrator | 2025-07-12 15:21:00.182183 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-07-12 15:21:00.182193 | orchestrator | Saturday 12 July 2025 15:20:39 +0000 (0:00:00.246) 0:03:38.343 
********* 2025-07-12 15:21:00.182204 | orchestrator | ok: [testbed-manager] 2025-07-12 15:21:00.182216 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:21:00.182227 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:21:00.182237 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:21:00.182248 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:21:00.182258 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:21:00.182269 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:21:00.182279 | orchestrator | 2025-07-12 15:21:00.182290 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-07-12 15:21:00.182300 | orchestrator | Saturday 12 July 2025 15:20:40 +0000 (0:00:00.708) 0:03:39.051 ********* 2025-07-12 15:21:00.182313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:21:00.182356 | orchestrator | 2025-07-12 15:21:00.182368 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-07-12 15:21:00.182379 | orchestrator | Saturday 12 July 2025 15:20:40 +0000 (0:00:00.359) 0:03:39.411 ********* 2025-07-12 15:21:00.182390 | orchestrator | ok: [testbed-manager] 2025-07-12 15:21:00.182401 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:21:00.182412 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:21:00.182422 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:21:00.182433 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:21:00.182444 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:21:00.182454 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:21:00.182468 | orchestrator | 2025-07-12 15:21:00.182487 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-07-12 15:21:00.182504 | 
orchestrator | Saturday 12 July 2025 15:20:48 +0000 (0:00:07.707) 0:03:47.119 ********* 2025-07-12 15:21:00.182524 | orchestrator | ok: [testbed-manager] 2025-07-12 15:21:00.182542 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:21:00.182560 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:21:00.182578 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:21:00.182596 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:21:00.182614 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:21:00.182632 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:21:00.182651 | orchestrator | 2025-07-12 15:21:00.182669 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-07-12 15:21:00.182688 | orchestrator | Saturday 12 July 2025 15:20:49 +0000 (0:00:01.189) 0:03:48.309 ********* 2025-07-12 15:21:00.182706 | orchestrator | ok: [testbed-manager] 2025-07-12 15:21:00.182725 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:21:00.182756 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:21:00.182776 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:21:00.182794 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:21:00.182812 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:21:00.182830 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:21:00.182850 | orchestrator | 2025-07-12 15:21:00.182868 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-07-12 15:21:00.182886 | orchestrator | Saturday 12 July 2025 15:20:50 +0000 (0:00:01.097) 0:03:49.407 ********* 2025-07-12 15:21:00.182905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:21:00.182925 | orchestrator | 2025-07-12 15:21:00.182944 | orchestrator | TASK [osism.services.smartd : Install smartmontools 
package] ******************* 2025-07-12 15:21:00.182963 | orchestrator | Saturday 12 July 2025 15:20:51 +0000 (0:00:00.453) 0:03:49.861 ********* 2025-07-12 15:21:00.182981 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:21:00.182999 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:21:00.183017 | orchestrator | changed: [testbed-manager] 2025-07-12 15:21:00.183037 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:21:00.183056 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:21:00.183074 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:21:00.183093 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:21:00.183111 | orchestrator | 2025-07-12 15:21:00.183130 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-07-12 15:21:00.183149 | orchestrator | Saturday 12 July 2025 15:20:59 +0000 (0:00:08.119) 0:03:57.980 ********* 2025-07-12 15:21:00.183168 | orchestrator | changed: [testbed-manager] 2025-07-12 15:21:00.183196 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:21:00.183216 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:21:00.183235 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:22:06.953826 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:22:06.953945 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:22:06.953962 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:22:06.953974 | orchestrator | 2025-07-12 15:22:06.953986 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-07-12 15:22:06.953999 | orchestrator | Saturday 12 July 2025 15:21:00 +0000 (0:00:00.641) 0:03:58.621 ********* 2025-07-12 15:22:06.954011 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:22:06.954077 | orchestrator | changed: [testbed-manager] 2025-07-12 15:22:06.954089 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:22:06.954100 | orchestrator | changed: [testbed-node-3] 2025-07-12 
15:22:06.954111 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:22:06.954122 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:22:06.954136 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:22:06.954154 | orchestrator | 2025-07-12 15:22:06.954169 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-07-12 15:22:06.954180 | orchestrator | Saturday 12 July 2025 15:21:01 +0000 (0:00:01.097) 0:03:59.718 ********* 2025-07-12 15:22:06.954191 | orchestrator | changed: [testbed-manager] 2025-07-12 15:22:06.954202 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:22:06.954213 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:22:06.954224 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:22:06.954236 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:22:06.954247 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:22:06.954257 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:22:06.954268 | orchestrator | 2025-07-12 15:22:06.954279 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-07-12 15:22:06.954290 | orchestrator | Saturday 12 July 2025 15:21:02 +0000 (0:00:01.061) 0:04:00.779 ********* 2025-07-12 15:22:06.954301 | orchestrator | ok: [testbed-manager] 2025-07-12 15:22:06.954313 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:22:06.954389 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:22:06.954404 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:22:06.954416 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:22:06.954428 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:22:06.954440 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:22:06.954452 | orchestrator | 2025-07-12 15:22:06.954465 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-07-12 15:22:06.954478 | orchestrator | Saturday 12 July 2025 15:21:02 +0000 (0:00:00.301) 
0:04:01.081 ********* 2025-07-12 15:22:06.954490 | orchestrator | ok: [testbed-manager] 2025-07-12 15:22:06.954502 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:22:06.954514 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:22:06.954527 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:22:06.954539 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:22:06.954556 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:22:06.954573 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:22:06.954585 | orchestrator | 2025-07-12 15:22:06.954597 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-07-12 15:22:06.954609 | orchestrator | Saturday 12 July 2025 15:21:02 +0000 (0:00:00.313) 0:04:01.394 ********* 2025-07-12 15:22:06.954622 | orchestrator | ok: [testbed-manager] 2025-07-12 15:22:06.954634 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:22:06.954646 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:22:06.954658 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:22:06.954671 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:22:06.954683 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:22:06.954695 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:22:06.954707 | orchestrator | 2025-07-12 15:22:06.954720 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-07-12 15:22:06.954731 | orchestrator | Saturday 12 July 2025 15:21:03 +0000 (0:00:00.272) 0:04:01.667 ********* 2025-07-12 15:22:06.954748 | orchestrator | ok: [testbed-manager] 2025-07-12 15:22:06.954764 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:22:06.954775 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:22:06.954785 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:22:06.954796 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:22:06.954806 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:22:06.954816 | orchestrator | ok: [testbed-node-2] 2025-07-12 
15:22:06.954827 | orchestrator | 2025-07-12 15:22:06.954838 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-07-12 15:22:06.954848 | orchestrator | Saturday 12 July 2025 15:21:08 +0000 (0:00:05.727) 0:04:07.394 ********* 2025-07-12 15:22:06.954868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:22:06.954884 | orchestrator | 2025-07-12 15:22:06.954895 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-07-12 15:22:06.954906 | orchestrator | Saturday 12 July 2025 15:21:09 +0000 (0:00:00.435) 0:04:07.830 ********* 2025-07-12 15:22:06.954917 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-07-12 15:22:06.954928 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-07-12 15:22:06.954939 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-07-12 15:22:06.954950 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-07-12 15:22:06.954960 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:22:06.954971 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-07-12 15:22:06.954982 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-07-12 15:22:06.954993 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:22:06.955003 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-07-12 15:22:06.955014 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-07-12 15:22:06.955025 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:22:06.955036 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-07-12 15:22:06.955056 | orchestrator | skipping: [testbed-node-2] 
2025-07-12 15:22:06.955067 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-07-12 15:22:06.955077 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-07-12 15:22:06.955103 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-07-12 15:22:06.955114 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:22:06.955145 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:22:06.955157 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-07-12 15:22:06.955168 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-07-12 15:22:06.955178 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:22:06.955189 | orchestrator | 2025-07-12 15:22:06.955200 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-07-12 15:22:06.955211 | orchestrator | Saturday 12 July 2025 15:21:09 +0000 (0:00:00.375) 0:04:08.206 ********* 2025-07-12 15:22:06.955222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:22:06.955233 | orchestrator | 2025-07-12 15:22:06.955244 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-07-12 15:22:06.955255 | orchestrator | Saturday 12 July 2025 15:21:10 +0000 (0:00:00.413) 0:04:08.619 ********* 2025-07-12 15:22:06.955265 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-07-12 15:22:06.955276 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-07-12 15:22:06.955287 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:22:06.955298 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-07-12 15:22:06.955309 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 15:22:06.955319 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service) 
2025-07-12 15:22:06.955330 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:22:06.955357 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service) 
2025-07-12 15:22:06.955368 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:22:06.955379 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service) 
2025-07-12 15:22:06.955390 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:22:06.955401 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:22:06.955412 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service) 
2025-07-12 15:22:06.955422 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:22:06.955433 | orchestrator |
2025-07-12 15:22:06.955444 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-07-12 15:22:06.955455 | orchestrator | Saturday 12 July 2025 15:21:10 +0000 (0:00:00.332) 0:04:08.952 *********
2025-07-12 15:22:06.955466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:22:06.955477 | orchestrator |
2025-07-12 15:22:06.955488 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-07-12 15:22:06.955499 | orchestrator | Saturday 12 July 2025 15:21:11 +0000 (0:00:00.604) 0:04:09.556 *********
2025-07-12 15:22:06.955510 | orchestrator | changed: [testbed-manager]
2025-07-12 15:22:06.955521 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:22:06.955531 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:22:06.955542 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:22:06.955553 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:22:06.955563 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:22:06.955574 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:22:06.955585 | orchestrator |
2025-07-12 15:22:06.955596 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-07-12 15:22:06.955615 | orchestrator | Saturday 12 July 2025 15:21:44 +0000 (0:00:33.337) 0:04:42.894 *********
2025-07-12 15:22:06.955626 | orchestrator | changed: [testbed-manager]
2025-07-12 15:22:06.955637 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:22:06.955648 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:22:06.955658 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:22:06.955669 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:22:06.955680 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:22:06.955691 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:22:06.955701 | orchestrator |
2025-07-12 15:22:06.955712 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-07-12 15:22:06.955723 | orchestrator | Saturday 12 July 2025 15:21:52 +0000 (0:00:07.840) 0:04:50.735 *********
2025-07-12 15:22:06.955734 | orchestrator | changed: [testbed-manager]
2025-07-12 15:22:06.955745 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:22:06.955755 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:22:06.955766 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:22:06.955777 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:22:06.955787 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:22:06.955798 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:22:06.955808 | orchestrator |
2025-07-12 15:22:06.955819 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-07-12 15:22:06.955830 | orchestrator | Saturday 12 July 2025 15:21:59 +0000 (0:00:07.569) 0:04:58.304 *********
2025-07-12 15:22:06.955841 | orchestrator | ok: [testbed-manager]
2025-07-12 15:22:06.955852 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:22:06.955862 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:22:06.955873 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:22:06.955884 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:22:06.955894 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:22:06.955905 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:22:06.955916 | orchestrator |
2025-07-12 15:22:06.955926 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-07-12 15:22:06.955937 | orchestrator | Saturday 12 July 2025 15:22:01 +0000 (0:00:01.663) 0:04:59.968 *********
2025-07-12 15:22:06.955948 | orchestrator | changed: [testbed-manager]
2025-07-12 15:22:06.955959 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:22:06.955970 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:22:06.955981 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:22:06.955991 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:22:06.956002 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:22:06.956013 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:22:06.956024 | orchestrator |
2025-07-12 15:22:06.956035 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-07-12 15:22:06.956054 | orchestrator | Saturday 12 July 2025 15:22:06 +0000 (0:00:05.419) 0:05:05.387 *********
2025-07-12 15:22:18.572312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:22:18.572478 | orchestrator |
2025-07-12 15:22:18.572511 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-07-12 15:22:18.572537 | orchestrator | Saturday 12 July 2025 15:22:07 +0000 (0:00:00.435) 0:05:05.823 *********
2025-07-12 15:22:18.572554 | orchestrator | changed: [testbed-manager]
2025-07-12 15:22:18.572572 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:22:18.572590 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:22:18.572606 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:22:18.572622 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:22:18.572639 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:22:18.572657 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:22:18.572674 | orchestrator |
2025-07-12 15:22:18.572692 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-07-12 15:22:18.572711 | orchestrator | Saturday 12 July 2025 15:22:08 +0000 (0:00:00.743) 0:05:06.566 *********
2025-07-12 15:22:18.572760 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:22:18.572780 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:22:18.572799 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:22:18.572818 | orchestrator | ok: [testbed-manager]
2025-07-12 15:22:18.572835 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:22:18.572847 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:22:18.572858 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:22:18.572870 | orchestrator |
2025-07-12 15:22:18.572883 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-07-12 15:22:18.572896 | orchestrator | Saturday 12 July 2025 15:22:09 +0000 (0:00:01.649) 0:05:08.216 *********
2025-07-12 15:22:18.572915 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:22:18.572934 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:22:18.572953 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:22:18.572973 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:22:18.572992 | orchestrator | changed: [testbed-manager]
2025-07-12 15:22:18.573031 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:22:18.573052 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:22:18.573070 | orchestrator |
2025-07-12 15:22:18.573088 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-07-12 15:22:18.573108 | orchestrator | Saturday 12 July 2025 15:22:10 +0000 (0:00:00.828) 0:05:09.044 *********
2025-07-12 15:22:18.573127 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:22:18.573148 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:22:18.573169 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:22:18.573187 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:22:18.573206 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:22:18.573224 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:22:18.573241 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:22:18.573257 | orchestrator |
2025-07-12 15:22:18.573274 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-07-12 15:22:18.573293 | orchestrator | Saturday 12 July 2025 15:22:10 +0000 (0:00:00.308) 0:05:09.353 *********
2025-07-12 15:22:18.573311 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:22:18.573327 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:22:18.573373 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:22:18.573392 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:22:18.573410 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:22:18.573428 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:22:18.573446 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:22:18.573464 | orchestrator |
2025-07-12 15:22:18.573482 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-07-12 15:22:18.573500 | orchestrator | Saturday 12 July 2025 15:22:11 +0000 (0:00:00.410) 0:05:09.764 *********
2025-07-12 15:22:18.573519 | orchestrator | ok: [testbed-manager]
2025-07-12 15:22:18.573538 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:22:18.573557 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:22:18.573574 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:22:18.573589 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:22:18.573599 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:22:18.573610 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:22:18.573620 | orchestrator |
2025-07-12 15:22:18.573638 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-07-12 15:22:18.573656 | orchestrator | Saturday 12 July 2025 15:22:11 +0000 (0:00:00.341) 0:05:10.105 *********
2025-07-12 15:22:18.573674 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:22:18.573693 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:22:18.573711 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:22:18.573726 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:22:18.573737 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:22:18.573748 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:22:18.573758 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:22:18.573769 | orchestrator |
2025-07-12 15:22:18.573796 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-07-12 15:22:18.573808 | orchestrator | Saturday 12 July 2025 15:22:11 +0000 (0:00:00.347) 0:05:10.453 *********
2025-07-12 15:22:18.573819 | orchestrator | ok: [testbed-manager]
2025-07-12 15:22:18.573829 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:22:18.573840 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:22:18.573851 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:22:18.573861 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:22:18.573872 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:22:18.573883 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:22:18.573894 | orchestrator |
2025-07-12 15:22:18.573904 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-07-12 15:22:18.573915 | orchestrator | Saturday 12 July 2025 15:22:12 +0000 (0:00:00.342) 0:05:10.795 *********
2025-07-12 15:22:18.573926 | orchestrator | ok: [testbed-manager] => 
2025-07-12 15:22:18.573936 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 15:22:18.573947 | orchestrator | ok: [testbed-node-0] => 
2025-07-12 15:22:18.573957 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 15:22:18.573968 | orchestrator | ok: [testbed-node-1] => 
2025-07-12 15:22:18.573978 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 15:22:18.573997 | orchestrator | ok: [testbed-node-2] => 
2025-07-12 15:22:18.574008 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 15:22:18.574106 | orchestrator | ok: [testbed-node-3] => 
2025-07-12 15:22:18.574124 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 15:22:18.574158 | orchestrator | ok: [testbed-node-4] => 
2025-07-12 15:22:18.574170 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 15:22:18.574181 | orchestrator | ok: [testbed-node-5] => 
2025-07-12 15:22:18.574192 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 15:22:18.574202 | orchestrator |
2025-07-12 15:22:18.574213 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-07-12 15:22:18.574224 | orchestrator | Saturday 12 July 2025 15:22:12 +0000 (0:00:00.332) 0:05:11.128 *********
2025-07-12 15:22:18.574234 | orchestrator | ok: [testbed-manager] => 
2025-07-12 15:22:18.574245 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 15:22:18.574255 | orchestrator | ok: [testbed-node-0] => 
2025-07-12 15:22:18.574265 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 15:22:18.574276 | orchestrator | ok: [testbed-node-1] => 
2025-07-12 15:22:18.574286 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 15:22:18.574297 | orchestrator | ok: [testbed-node-2] => 
2025-07-12 15:22:18.574307 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 15:22:18.574318 | orchestrator | ok: [testbed-node-3] => 
2025-07-12 15:22:18.574328 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 15:22:18.574363 | orchestrator | ok: [testbed-node-4] => 
2025-07-12 15:22:18.574375 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 15:22:18.574386 | orchestrator | ok: [testbed-node-5] => 
2025-07-12 15:22:18.574396 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 15:22:18.574407 | orchestrator |
2025-07-12 15:22:18.574418 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-07-12 15:22:18.574429 | orchestrator | Saturday 12 July 2025 15:22:13 +0000 (0:00:00.475) 0:05:11.603 *********
2025-07-12 15:22:18.574439 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:22:18.574450 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:22:18.574460 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:22:18.574471 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:22:18.574481 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:22:18.574492 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:22:18.574502 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:22:18.574513 | orchestrator |
2025-07-12 15:22:18.574523 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-07-12 15:22:18.574534 | orchestrator | Saturday 12 July 2025 15:22:13 +0000 (0:00:00.278) 0:05:11.881 *********
2025-07-12 15:22:18.574545 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:22:18.574555 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:22:18.574576 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:22:18.574587 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:22:18.574597 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:22:18.574608 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:22:18.574618 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:22:18.574629 | orchestrator |
2025-07-12 15:22:18.574639 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-07-12 15:22:18.574650 | orchestrator | Saturday 12 July 2025 15:22:13 +0000 (0:00:00.295) 0:05:12.176 *********
2025-07-12 15:22:18.574664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:22:18.574677 | orchestrator |
2025-07-12 15:22:18.574688 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-07-12 15:22:18.574699 | orchestrator | Saturday 12 July 2025 15:22:14 +0000 (0:00:00.500) 0:05:12.677 *********
2025-07-12 15:22:18.574709 | orchestrator | ok: [testbed-manager]
2025-07-12 15:22:18.574720 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:22:18.574731 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:22:18.574741 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:22:18.574752 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:22:18.574762 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:22:18.574773 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:22:18.574783 | orchestrator |
2025-07-12 15:22:18.574794 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-07-12 15:22:18.574805 | orchestrator | Saturday 12 July 2025 15:22:15 +0000 (0:00:00.873) 0:05:13.550 *********
2025-07-12 15:22:18.574815 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:22:18.574826 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:22:18.574836 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:22:18.574847 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:22:18.574857 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:22:18.574868 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:22:18.574878 | orchestrator | ok: [testbed-manager]
2025-07-12 15:22:18.574889 | orchestrator |
2025-07-12 15:22:18.574900 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-07-12 15:22:18.574912 | orchestrator | Saturday 12 July 2025 15:22:17 +0000 (0:00:02.859) 0:05:16.410 *********
2025-07-12 15:22:18.574923 | orchestrator | skipping: [testbed-manager] => (item=containerd) 
2025-07-12 15:22:18.574934 | orchestrator | skipping: [testbed-manager] => (item=docker.io) 
2025-07-12 15:22:18.574945 | orchestrator | skipping: [testbed-manager] => (item=docker-engine) 
2025-07-12 15:22:18.574955 | orchestrator | skipping: [testbed-node-0] => (item=containerd) 
2025-07-12 15:22:18.574966 | orchestrator | skipping: [testbed-node-0] => (item=docker.io) 
2025-07-12 15:22:18.574984 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine) 
2025-07-12 15:22:18.575003 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:22:18.575021 | orchestrator | skipping: [testbed-node-1] => (item=containerd) 
2025-07-12 15:22:18.575039 | orchestrator | skipping: [testbed-node-1] => (item=docker.io) 
2025-07-12 15:22:18.575058 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine) 
2025-07-12 15:22:18.575077 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:22:18.575095 | orchestrator | skipping: [testbed-node-2] => (item=containerd) 
2025-07-12 15:22:18.575112 | orchestrator | skipping: [testbed-node-2] => (item=docker.io) 
2025-07-12 15:22:18.575130 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine) 
2025-07-12 15:22:18.575149 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:22:18.575176 | orchestrator | skipping: [testbed-node-3] => (item=containerd) 
2025-07-12 15:22:18.575196 | orchestrator | skipping: [testbed-node-3] => (item=docker.io) 
2025-07-12 15:22:18.575227 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine) 
2025-07-12 15:23:16.029122 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:23:16.029241 | orchestrator | skipping: [testbed-node-4] => (item=containerd) 
2025-07-12 15:23:16.029286 | orchestrator | skipping: [testbed-node-4] => (item=docker.io) 
2025-07-12 15:23:16.029298 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine) 
2025-07-12 15:23:16.029309 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:23:16.029320 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:23:16.029330 | orchestrator | skipping: [testbed-node-5] => (item=containerd) 
2025-07-12 15:23:16.029341 | orchestrator | skipping: [testbed-node-5] => (item=docker.io) 
2025-07-12 15:23:16.029352 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine) 
2025-07-12 15:23:16.029363 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:23:16.029374 | orchestrator |
2025-07-12 15:23:16.029386 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-07-12 15:23:16.029398 | orchestrator | Saturday 12 July 2025 15:22:18 +0000 (0:00:00.834) 0:05:17.244 *********
2025-07-12 15:23:16.029409 | orchestrator | ok: [testbed-manager]
2025-07-12 15:23:16.029421 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.029432 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.029442 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.029453 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.029464 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.029475 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.029486 | orchestrator |
2025-07-12 15:23:16.029497 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-07-12 15:23:16.029508 | orchestrator | Saturday 12 July 2025 15:22:24 +0000 (0:00:05.984) 0:05:23.229 *********
2025-07-12 15:23:16.029518 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.029529 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.029540 | orchestrator | ok: [testbed-manager]
2025-07-12 15:23:16.029551 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.029561 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.029572 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.029583 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.029594 | orchestrator |
2025-07-12 15:23:16.029605 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-07-12 15:23:16.029615 | orchestrator | Saturday 12 July 2025 15:22:25 +0000 (0:00:01.035) 0:05:24.265 *********
2025-07-12 15:23:16.029626 | orchestrator | ok: [testbed-manager]
2025-07-12 15:23:16.029637 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.029648 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.029659 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.029671 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.029684 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.029695 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.029708 | orchestrator |
2025-07-12 15:23:16.029720 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-07-12 15:23:16.029732 | orchestrator | Saturday 12 July 2025 15:22:33 +0000 (0:00:07.253) 0:05:31.518 *********
2025-07-12 15:23:16.029744 | orchestrator | changed: [testbed-manager]
2025-07-12 15:23:16.029756 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.029768 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.029780 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.029792 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.029804 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.029815 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.029828 | orchestrator |
2025-07-12 15:23:16.029840 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-07-12 15:23:16.029853 | orchestrator | Saturday 12 July 2025 15:22:36 +0000 (0:00:03.192) 0:05:34.711 *********
2025-07-12 15:23:16.029865 | orchestrator | ok: [testbed-manager]
2025-07-12 15:23:16.029877 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.029889 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.029901 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.029935 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.029948 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.029960 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.029972 | orchestrator |
2025-07-12 15:23:16.029985 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-07-12 15:23:16.029997 | orchestrator | Saturday 12 July 2025 15:22:37 +0000 (0:00:01.510) 0:05:36.221 *********
2025-07-12 15:23:16.030009 | orchestrator | ok: [testbed-manager]
2025-07-12 15:23:16.030082 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.030094 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.030105 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.030115 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.030127 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.030137 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.030148 | orchestrator |
2025-07-12 15:23:16.030159 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-07-12 15:23:16.030170 | orchestrator | Saturday 12 July 2025 15:22:39 +0000 (0:00:01.276) 0:05:37.498 *********
2025-07-12 15:23:16.030181 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:23:16.030191 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:23:16.030202 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:23:16.030212 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:23:16.030223 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:23:16.030233 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:23:16.030260 | orchestrator | changed: [testbed-manager]
2025-07-12 15:23:16.030272 | orchestrator |
2025-07-12 15:23:16.030282 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-07-12 15:23:16.030293 | orchestrator | Saturday 12 July 2025 15:22:39 +0000 (0:00:00.627) 0:05:38.125 *********
2025-07-12 15:23:16.030304 | orchestrator | ok: [testbed-manager]
2025-07-12 15:23:16.030315 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.030325 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.030336 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.030347 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.030357 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.030368 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.030379 | orchestrator |
2025-07-12 15:23:16.030404 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-07-12 15:23:16.030415 | orchestrator | Saturday 12 July 2025 15:22:49 +0000 (0:00:09.872) 0:05:47.998 *********
2025-07-12 15:23:16.030426 | orchestrator | changed: [testbed-manager]
2025-07-12 15:23:16.030455 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.030467 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.030477 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.030488 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.030499 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.030509 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.030520 | orchestrator |
2025-07-12 15:23:16.030531 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-07-12 15:23:16.030542 | orchestrator | Saturday 12 July 2025 15:22:50 +0000 (0:00:00.876) 0:05:48.874 *********
2025-07-12 15:23:16.030552 | orchestrator | ok: [testbed-manager]
2025-07-12 15:23:16.030563 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.030574 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.030584 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.030595 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.030605 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.030616 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.030626 | orchestrator |
2025-07-12 15:23:16.030637 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-07-12 15:23:16.030648 | orchestrator | Saturday 12 July 2025 15:22:59 +0000 (0:00:08.597) 0:05:57.472 *********
2025-07-12 15:23:16.030658 | orchestrator | ok: [testbed-manager]
2025-07-12 15:23:16.030678 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.030689 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.030699 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.030710 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.030721 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.030731 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.030742 | orchestrator |
2025-07-12 15:23:16.030753 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-07-12 15:23:16.030763 | orchestrator | Saturday 12 July 2025 15:23:09 +0000 (0:00:10.656) 0:06:08.128 *********
2025-07-12 15:23:16.030774 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-07-12 15:23:16.030785 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-07-12 15:23:16.030796 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-07-12 15:23:16.030806 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-07-12 15:23:16.030817 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-07-12 15:23:16.030827 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-07-12 15:23:16.030838 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-07-12 15:23:16.030849 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-07-12 15:23:16.030859 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-07-12 15:23:16.030870 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-07-12 15:23:16.030880 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-07-12 15:23:16.030891 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-07-12 15:23:16.030901 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-07-12 15:23:16.030912 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-07-12 15:23:16.030923 | orchestrator |
2025-07-12 15:23:16.030933 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-07-12 15:23:16.030944 | orchestrator | Saturday 12 July 2025 15:23:10 +0000 (0:00:01.200) 0:06:09.329 *********
2025-07-12 15:23:16.030955 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:23:16.030965 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:23:16.030976 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:23:16.030987 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:23:16.030997 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:23:16.031008 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:23:16.031019 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:23:16.031029 | orchestrator |
2025-07-12 15:23:16.031040 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-07-12 15:23:16.031050 | orchestrator | Saturday 12 July 2025 15:23:11 +0000 (0:00:00.490) 0:06:09.820 *********
2025-07-12 15:23:16.031061 | orchestrator | ok: [testbed-manager]
2025-07-12 15:23:16.031072 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:23:16.031083 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:23:16.031093 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:23:16.031104 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:23:16.031115 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:23:16.031125 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:23:16.031136 | orchestrator |
2025-07-12 15:23:16.031147 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-07-12 15:23:16.031159 | orchestrator | Saturday 12 July 2025 15:23:15 +0000 (0:00:03.781) 0:06:13.601 *********
2025-07-12 15:23:16.031170 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:23:16.031180 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:23:16.031191 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:23:16.031201 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:23:16.031212 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:23:16.031222 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:23:16.031233 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:23:16.031282 | orchestrator |
2025-07-12 15:23:16.031295 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-07-12 15:23:16.031313 | orchestrator | Saturday 12 July 2025 15:23:15 +0000 (0:00:00.560) 0:06:14.161 *********
2025-07-12 15:23:16.031324 | orchestrator | skipping: [testbed-manager] => (item=python3-docker) 
2025-07-12 15:23:16.031335 | orchestrator | skipping: [testbed-manager] => (item=python-docker) 
2025-07-12 15:23:16.031346 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:23:16.031356 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker) 
2025-07-12 15:23:16.031367 | orchestrator | skipping: [testbed-node-0] => (item=python-docker) 
2025-07-12 15:23:16.031377 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:23:16.031388 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker) 
2025-07-12 15:23:16.031399 | orchestrator | skipping: [testbed-node-1] => (item=python-docker) 
2025-07-12 15:23:16.031410 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:23:16.031421 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker) 
2025-07-12 15:23:16.031438 | orchestrator | skipping: [testbed-node-2] => (item=python-docker) 
2025-07-12 15:23:35.633095 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:23:35.633242 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker) 
2025-07-12 15:23:35.633274 | orchestrator | skipping: [testbed-node-3] => (item=python-docker) 
2025-07-12 15:23:35.633286 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:23:35.633297 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker) 
2025-07-12 15:23:35.633308 | orchestrator | skipping: [testbed-node-4] => (item=python-docker) 
2025-07-12 15:23:35.633320 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:23:35.633331 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker) 
2025-07-12 15:23:35.633342 | orchestrator | skipping: [testbed-node-5] => (item=python-docker) 
2025-07-12 15:23:35.633353 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:23:35.633364 | orchestrator |
2025-07-12 15:23:35.633376 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-07-12 15:23:35.633388 | orchestrator | Saturday 12 July 2025 15:23:16 +0000 (0:00:00.578) 0:06:14.740 *********
2025-07-12 15:23:35.633399 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:23:35.633410 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:23:35.633422 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:23:35.633432 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:23:35.633443 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:23:35.633454 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:23:35.633465 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:23:35.633476 | orchestrator |
2025-07-12 15:23:35.633487 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-07-12 15:23:35.633498 | orchestrator | Saturday 12 July 2025 15:23:16 +0000 (0:00:00.514) 0:06:15.255 *********
2025-07-12 15:23:35.633510 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:23:35.633522 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:23:35.633532 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:23:35.633543 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:23:35.633554 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:23:35.633565 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:23:35.633576 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:23:35.633587 | orchestrator |
2025-07-12 15:23:35.633597 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-07-12 15:23:35.633609 | orchestrator | Saturday 12 July 2025 15:23:17 +0000 (0:00:00.475) 0:06:15.730 *********
2025-07-12 15:23:35.633619 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:23:35.633630 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:23:35.633644 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:23:35.633656 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:23:35.633668 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:23:35.633680 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:23:35.633717 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:23:35.633729 | orchestrator |
2025-07-12 15:23:35.633741 | orchestrator | TASK [osism.services.docker : 
Ensure that some packages are not installed] ***** 2025-07-12 15:23:35.633753 | orchestrator | Saturday 12 July 2025 15:23:18 +0000 (0:00:00.766) 0:06:16.497 ********* 2025-07-12 15:23:35.633764 | orchestrator | ok: [testbed-manager] 2025-07-12 15:23:35.633777 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:23:35.633789 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:23:35.633800 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:23:35.633812 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:23:35.633824 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:23:35.633835 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:23:35.633847 | orchestrator | 2025-07-12 15:23:35.633859 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-07-12 15:23:35.633871 | orchestrator | Saturday 12 July 2025 15:23:19 +0000 (0:00:01.661) 0:06:18.159 ********* 2025-07-12 15:23:35.633884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:23:35.633898 | orchestrator | 2025-07-12 15:23:35.633912 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-07-12 15:23:35.633924 | orchestrator | Saturday 12 July 2025 15:23:20 +0000 (0:00:00.890) 0:06:19.049 ********* 2025-07-12 15:23:35.633936 | orchestrator | ok: [testbed-manager] 2025-07-12 15:23:35.633949 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:23:35.633961 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:23:35.633973 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:23:35.633985 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:23:35.633996 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:23:35.634007 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:23:35.634062 | orchestrator | 
2025-07-12 15:23:35.634074 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-07-12 15:23:35.634085 | orchestrator | Saturday 12 July 2025 15:23:21 +0000 (0:00:00.887) 0:06:19.936 ********* 2025-07-12 15:23:35.634096 | orchestrator | ok: [testbed-manager] 2025-07-12 15:23:35.634106 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:23:35.634117 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:23:35.634128 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:23:35.634138 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:23:35.634149 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:23:35.634160 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:23:35.634170 | orchestrator | 2025-07-12 15:23:35.634181 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-07-12 15:23:35.634192 | orchestrator | Saturday 12 July 2025 15:23:22 +0000 (0:00:01.056) 0:06:20.993 ********* 2025-07-12 15:23:35.634202 | orchestrator | ok: [testbed-manager] 2025-07-12 15:23:35.634238 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:23:35.634249 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:23:35.634260 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:23:35.634270 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:23:35.634298 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:23:35.634309 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:23:35.634320 | orchestrator | 2025-07-12 15:23:35.634335 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-07-12 15:23:35.634346 | orchestrator | Saturday 12 July 2025 15:23:23 +0000 (0:00:01.375) 0:06:22.369 ********* 2025-07-12 15:23:35.634374 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:23:35.634386 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:23:35.634396 | orchestrator | ok: [testbed-node-1] 
2025-07-12 15:23:35.634407 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:23:35.634418 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:23:35.634428 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:23:35.634439 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:23:35.634458 | orchestrator | 2025-07-12 15:23:35.634469 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-07-12 15:23:35.634480 | orchestrator | Saturday 12 July 2025 15:23:25 +0000 (0:00:01.414) 0:06:23.784 ********* 2025-07-12 15:23:35.634491 | orchestrator | ok: [testbed-manager] 2025-07-12 15:23:35.634502 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:23:35.634512 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:23:35.634523 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:23:35.634533 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:23:35.634544 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:23:35.634554 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:23:35.634565 | orchestrator | 2025-07-12 15:23:35.634575 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-07-12 15:23:35.634586 | orchestrator | Saturday 12 July 2025 15:23:26 +0000 (0:00:01.346) 0:06:25.130 ********* 2025-07-12 15:23:35.634597 | orchestrator | changed: [testbed-manager] 2025-07-12 15:23:35.634608 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:23:35.634618 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:23:35.634629 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:23:35.634640 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:23:35.634650 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:23:35.634661 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:23:35.634671 | orchestrator | 2025-07-12 15:23:35.634682 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-07-12 
15:23:35.634692 | orchestrator | Saturday 12 July 2025 15:23:28 +0000 (0:00:01.568) 0:06:26.699 ********* 2025-07-12 15:23:35.634703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:23:35.634715 | orchestrator | 2025-07-12 15:23:35.634725 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-07-12 15:23:35.634736 | orchestrator | Saturday 12 July 2025 15:23:29 +0000 (0:00:01.053) 0:06:27.752 ********* 2025-07-12 15:23:35.634746 | orchestrator | ok: [testbed-manager] 2025-07-12 15:23:35.634757 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:23:35.634767 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:23:35.634778 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:23:35.634789 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:23:35.634799 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:23:35.634810 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:23:35.634820 | orchestrator | 2025-07-12 15:23:35.634831 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-07-12 15:23:35.634842 | orchestrator | Saturday 12 July 2025 15:23:30 +0000 (0:00:01.395) 0:06:29.148 ********* 2025-07-12 15:23:35.634852 | orchestrator | ok: [testbed-manager] 2025-07-12 15:23:35.634863 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:23:35.634873 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:23:35.634884 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:23:35.634894 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:23:35.634905 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:23:35.634915 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:23:35.634926 | orchestrator | 2025-07-12 15:23:35.634937 | orchestrator | TASK [osism.services.docker : Manage docker socket 
service] ******************** 2025-07-12 15:23:35.634947 | orchestrator | Saturday 12 July 2025 15:23:31 +0000 (0:00:01.124) 0:06:30.272 ********* 2025-07-12 15:23:35.634958 | orchestrator | ok: [testbed-manager] 2025-07-12 15:23:35.634968 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:23:35.634979 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:23:35.634990 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:23:35.635000 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:23:35.635011 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:23:35.635021 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:23:35.635032 | orchestrator | 2025-07-12 15:23:35.635042 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-07-12 15:23:35.635060 | orchestrator | Saturday 12 July 2025 15:23:33 +0000 (0:00:01.449) 0:06:31.722 ********* 2025-07-12 15:23:35.635070 | orchestrator | ok: [testbed-manager] 2025-07-12 15:23:35.635081 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:23:35.635091 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:23:35.635102 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:23:35.635112 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:23:35.635123 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:23:35.635133 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:23:35.635143 | orchestrator | 2025-07-12 15:23:35.635154 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-07-12 15:23:35.635165 | orchestrator | Saturday 12 July 2025 15:23:34 +0000 (0:00:01.176) 0:06:32.898 ********* 2025-07-12 15:23:35.635176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:23:35.635187 | orchestrator | 2025-07-12 15:23:35.635197 | orchestrator | TASK 
[osism.services.docker : Flush handlers] ********************************** 2025-07-12 15:23:35.635241 | orchestrator | Saturday 12 July 2025 15:23:35 +0000 (0:00:00.882) 0:06:33.781 ********* 2025-07-12 15:23:35.635254 | orchestrator | 2025-07-12 15:23:35.635265 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 15:23:35.635275 | orchestrator | Saturday 12 July 2025 15:23:35 +0000 (0:00:00.037) 0:06:33.818 ********* 2025-07-12 15:23:35.635286 | orchestrator | 2025-07-12 15:23:35.635296 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 15:23:35.635307 | orchestrator | Saturday 12 July 2025 15:23:35 +0000 (0:00:00.038) 0:06:33.856 ********* 2025-07-12 15:23:35.635318 | orchestrator | 2025-07-12 15:23:35.635334 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 15:23:35.635345 | orchestrator | Saturday 12 July 2025 15:23:35 +0000 (0:00:00.044) 0:06:33.900 ********* 2025-07-12 15:23:35.635356 | orchestrator | 2025-07-12 15:23:35.635374 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 15:24:01.770550 | orchestrator | Saturday 12 July 2025 15:23:35 +0000 (0:00:00.039) 0:06:33.940 ********* 2025-07-12 15:24:01.770671 | orchestrator | 2025-07-12 15:24:01.770689 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 15:24:01.770702 | orchestrator | Saturday 12 July 2025 15:23:35 +0000 (0:00:00.037) 0:06:33.977 ********* 2025-07-12 15:24:01.770714 | orchestrator | 2025-07-12 15:24:01.770725 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 15:24:01.770737 | orchestrator | Saturday 12 July 2025 15:23:35 +0000 (0:00:00.044) 0:06:34.021 ********* 2025-07-12 15:24:01.770747 | orchestrator | 2025-07-12 15:24:01.770758 | orchestrator | 
RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-12 15:24:01.770769 | orchestrator | Saturday 12 July 2025 15:23:35 +0000 (0:00:00.038) 0:06:34.059 ********* 2025-07-12 15:24:01.770780 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:24:01.770792 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:24:01.770803 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:24:01.770813 | orchestrator | 2025-07-12 15:24:01.770824 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-07-12 15:24:01.770835 | orchestrator | Saturday 12 July 2025 15:23:36 +0000 (0:00:01.266) 0:06:35.326 ********* 2025-07-12 15:24:01.770847 | orchestrator | changed: [testbed-manager] 2025-07-12 15:24:01.770859 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:24:01.770875 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:24:01.770893 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:24:01.770911 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:24:01.770930 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:24:01.770973 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:24:01.770984 | orchestrator | 2025-07-12 15:24:01.770995 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-07-12 15:24:01.771033 | orchestrator | Saturday 12 July 2025 15:23:38 +0000 (0:00:01.323) 0:06:36.650 ********* 2025-07-12 15:24:01.771044 | orchestrator | changed: [testbed-manager] 2025-07-12 15:24:01.771055 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:24:01.771066 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:24:01.771076 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:24:01.771087 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:24:01.771100 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:24:01.771112 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:24:01.771124 | orchestrator | 
2025-07-12 15:24:01.771137 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-07-12 15:24:01.771150 | orchestrator | Saturday 12 July 2025 15:23:39 +0000 (0:00:01.177) 0:06:37.828 ********* 2025-07-12 15:24:01.771185 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:24:01.771198 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:24:01.771210 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:24:01.771222 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:24:01.771235 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:24:01.771247 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:24:01.771259 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:24:01.771271 | orchestrator | 2025-07-12 15:24:01.771283 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-07-12 15:24:01.771295 | orchestrator | Saturday 12 July 2025 15:23:41 +0000 (0:00:02.425) 0:06:40.254 ********* 2025-07-12 15:24:01.771308 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:24:01.771321 | orchestrator | 2025-07-12 15:24:01.771332 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-07-12 15:24:01.771345 | orchestrator | Saturday 12 July 2025 15:23:41 +0000 (0:00:00.111) 0:06:40.365 ********* 2025-07-12 15:24:01.771357 | orchestrator | ok: [testbed-manager] 2025-07-12 15:24:01.771369 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:24:01.771381 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:24:01.771393 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:24:01.771406 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:24:01.771417 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:24:01.771487 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:24:01.771498 | orchestrator | 2025-07-12 15:24:01.771509 | orchestrator | TASK [osism.services.docker : Log into private 
registry and force re-authorization] *** 2025-07-12 15:24:01.771521 | orchestrator | Saturday 12 July 2025 15:23:42 +0000 (0:00:00.992) 0:06:41.358 ********* 2025-07-12 15:24:01.771531 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:24:01.771542 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:24:01.771553 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:24:01.771563 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:24:01.771574 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:24:01.771585 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:24:01.771595 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:24:01.771606 | orchestrator | 2025-07-12 15:24:01.771617 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-07-12 15:24:01.771628 | orchestrator | Saturday 12 July 2025 15:23:43 +0000 (0:00:00.700) 0:06:42.058 ********* 2025-07-12 15:24:01.771653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:24:01.771667 | orchestrator | 2025-07-12 15:24:01.771678 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-07-12 15:24:01.771688 | orchestrator | Saturday 12 July 2025 15:23:44 +0000 (0:00:00.870) 0:06:42.928 ********* 2025-07-12 15:24:01.771699 | orchestrator | ok: [testbed-manager] 2025-07-12 15:24:01.771710 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:24:01.771721 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:24:01.771731 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:24:01.771742 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:24:01.771764 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:24:01.771775 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:24:01.771786 | orchestrator | 
2025-07-12 15:24:01.771797 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-07-12 15:24:01.771823 | orchestrator | Saturday 12 July 2025 15:23:45 +0000 (0:00:00.837) 0:06:43.766 ********* 2025-07-12 15:24:01.771834 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-07-12 15:24:01.771845 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-07-12 15:24:01.771875 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-07-12 15:24:01.771887 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-07-12 15:24:01.771897 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-07-12 15:24:01.771908 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-07-12 15:24:01.771919 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-07-12 15:24:01.771930 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-07-12 15:24:01.771940 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-07-12 15:24:01.771951 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-07-12 15:24:01.771962 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-07-12 15:24:01.771972 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-07-12 15:24:01.771983 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-07-12 15:24:01.771993 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-07-12 15:24:01.772004 | orchestrator | 2025-07-12 15:24:01.772015 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-07-12 15:24:01.772025 | orchestrator | Saturday 12 July 2025 15:23:48 +0000 (0:00:02.701) 0:06:46.467 ********* 2025-07-12 15:24:01.772036 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:24:01.772047 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 15:24:01.772058 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:24:01.772068 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:24:01.772079 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:24:01.772089 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:24:01.772100 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:24:01.772110 | orchestrator | 2025-07-12 15:24:01.772121 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-07-12 15:24:01.772132 | orchestrator | Saturday 12 July 2025 15:23:48 +0000 (0:00:00.551) 0:06:47.019 ********* 2025-07-12 15:24:01.772145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:24:01.772207 | orchestrator | 2025-07-12 15:24:01.772221 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-07-12 15:24:01.772232 | orchestrator | Saturday 12 July 2025 15:23:49 +0000 (0:00:00.865) 0:06:47.884 ********* 2025-07-12 15:24:01.772243 | orchestrator | ok: [testbed-manager] 2025-07-12 15:24:01.772254 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:24:01.772264 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:24:01.772275 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:24:01.772286 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:24:01.772297 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:24:01.772307 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:24:01.772318 | orchestrator | 2025-07-12 15:24:01.772329 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-07-12 15:24:01.772339 | orchestrator | Saturday 12 July 2025 15:23:50 +0000 (0:00:01.173) 0:06:49.058 ********* 2025-07-12 
15:24:01.772350 | orchestrator | ok: [testbed-manager] 2025-07-12 15:24:01.772361 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:24:01.772371 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:24:01.772382 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:24:01.772402 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:24:01.772413 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:24:01.772423 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:24:01.772434 | orchestrator | 2025-07-12 15:24:01.772445 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-07-12 15:24:01.772456 | orchestrator | Saturday 12 July 2025 15:23:51 +0000 (0:00:00.875) 0:06:49.934 ********* 2025-07-12 15:24:01.772467 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:24:01.772478 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:24:01.772488 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:24:01.772499 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:24:01.772510 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:24:01.772520 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:24:01.772531 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:24:01.772541 | orchestrator | 2025-07-12 15:24:01.772552 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-07-12 15:24:01.772563 | orchestrator | Saturday 12 July 2025 15:23:51 +0000 (0:00:00.471) 0:06:50.405 ********* 2025-07-12 15:24:01.772574 | orchestrator | ok: [testbed-manager] 2025-07-12 15:24:01.772584 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:24:01.772595 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:24:01.772606 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:24:01.772616 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:24:01.772627 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:24:01.772637 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:24:01.772648 | 
orchestrator | 2025-07-12 15:24:01.772659 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-07-12 15:24:01.772669 | orchestrator | Saturday 12 July 2025 15:23:53 +0000 (0:00:01.403) 0:06:51.808 ********* 2025-07-12 15:24:01.772680 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:24:01.772691 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:24:01.772701 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:24:01.772712 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:24:01.772723 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:24:01.772733 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:24:01.772744 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:24:01.772755 | orchestrator | 2025-07-12 15:24:01.772765 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-07-12 15:24:01.772776 | orchestrator | Saturday 12 July 2025 15:23:53 +0000 (0:00:00.515) 0:06:52.324 ********* 2025-07-12 15:24:01.772787 | orchestrator | ok: [testbed-manager] 2025-07-12 15:24:01.772797 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:24:01.772808 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:24:01.772824 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:24:01.772835 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:24:01.772846 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:24:01.772857 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:24:01.772867 | orchestrator | 2025-07-12 15:24:01.772886 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-07-12 15:24:32.722362 | orchestrator | Saturday 12 July 2025 15:24:01 +0000 (0:00:07.877) 0:07:00.201 ********* 2025-07-12 15:24:32.722503 | orchestrator | ok: [testbed-manager] 2025-07-12 15:24:32.722530 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:24:32.722550 | orchestrator | 
changed: [testbed-node-1] 2025-07-12 15:24:32.722571 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:24:32.722590 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:24:32.722608 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:24:32.722627 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:24:32.722646 | orchestrator | 2025-07-12 15:24:32.722666 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-07-12 15:24:32.722685 | orchestrator | Saturday 12 July 2025 15:24:03 +0000 (0:00:01.323) 0:07:01.525 ********* 2025-07-12 15:24:32.722704 | orchestrator | ok: [testbed-manager] 2025-07-12 15:24:32.722723 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:24:32.722771 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:24:32.722784 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:24:32.722794 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:24:32.722805 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:24:32.722815 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:24:32.722826 | orchestrator | 2025-07-12 15:24:32.722836 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-07-12 15:24:32.722847 | orchestrator | Saturday 12 July 2025 15:24:04 +0000 (0:00:01.708) 0:07:03.233 ********* 2025-07-12 15:24:32.722858 | orchestrator | ok: [testbed-manager] 2025-07-12 15:24:32.722868 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:24:32.722878 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:24:32.722889 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:24:32.722901 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:24:32.722913 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:24:32.722925 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:24:32.722937 | orchestrator | 2025-07-12 15:24:32.722949 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] 
*********************
2025-07-12 15:24:32.722961 | orchestrator | Saturday 12 July 2025 15:24:06 +0000 (0:00:01.622) 0:07:04.856 *********
2025-07-12 15:24:32.722973 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:32.722985 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:32.722997 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:32.723008 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:32.723028 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:32.723046 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:32.723066 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:32.723086 | orchestrator |
2025-07-12 15:24:32.723101 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 15:24:32.723172 | orchestrator | Saturday 12 July 2025 15:24:07 +0000 (0:00:01.091) 0:07:05.947 *********
2025-07-12 15:24:32.723194 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:24:32.723214 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:24:32.723233 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:24:32.723254 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:24:32.723275 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:24:32.723293 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:24:32.723311 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:24:32.723328 | orchestrator |
2025-07-12 15:24:32.723344 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-07-12 15:24:32.723355 | orchestrator | Saturday 12 July 2025 15:24:08 +0000 (0:00:00.762) 0:07:06.710 *********
2025-07-12 15:24:32.723365 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:24:32.723376 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:24:32.723386 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:24:32.723397 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:24:32.723407 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:24:32.723418 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:24:32.723428 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:24:32.723438 | orchestrator |
2025-07-12 15:24:32.723449 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-07-12 15:24:32.723459 | orchestrator | Saturday 12 July 2025 15:24:08 +0000 (0:00:00.516) 0:07:07.227 *********
2025-07-12 15:24:32.723470 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:32.723481 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:32.723491 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:32.723502 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:32.723512 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:32.723523 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:32.723533 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:32.723544 | orchestrator |
2025-07-12 15:24:32.723554 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-07-12 15:24:32.723565 | orchestrator | Saturday 12 July 2025 15:24:09 +0000 (0:00:00.651) 0:07:07.879 *********
2025-07-12 15:24:32.723586 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:32.723597 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:32.723608 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:32.723618 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:32.723628 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:32.723639 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:32.723649 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:32.723660 | orchestrator |
2025-07-12 15:24:32.723670 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-07-12 15:24:32.723681 | orchestrator | Saturday 12 July 2025 15:24:09 +0000 (0:00:00.528) 0:07:08.407 *********
2025-07-12 15:24:32.723691 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:32.723702 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:32.723712 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:32.723723 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:32.723733 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:32.723750 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:32.723767 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:32.723786 | orchestrator |
2025-07-12 15:24:32.723804 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-07-12 15:24:32.723818 | orchestrator | Saturday 12 July 2025 15:24:10 +0000 (0:00:00.509) 0:07:08.917 *********
2025-07-12 15:24:32.723829 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:32.723840 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:32.723858 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:32.723895 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:32.723916 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:32.723934 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:32.723948 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:32.723958 | orchestrator |
2025-07-12 15:24:32.723969 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-07-12 15:24:32.724000 | orchestrator | Saturday 12 July 2025 15:24:16 +0000 (0:00:05.707) 0:07:14.624 *********
2025-07-12 15:24:32.724012 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:24:32.724023 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:24:32.724033 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:24:32.724044 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:24:32.724054 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:24:32.724064 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:24:32.724075 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:24:32.724085 | orchestrator |
2025-07-12 15:24:32.724096 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-07-12 15:24:32.724127 | orchestrator | Saturday 12 July 2025 15:24:16 +0000 (0:00:00.526) 0:07:15.151 *********
2025-07-12 15:24:32.724141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:24:32.724155 | orchestrator |
2025-07-12 15:24:32.724166 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-07-12 15:24:32.724177 | orchestrator | Saturday 12 July 2025 15:24:17 +0000 (0:00:00.966) 0:07:16.118 *********
2025-07-12 15:24:32.724187 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:32.724198 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:32.724208 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:32.724219 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:32.724229 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:32.724240 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:32.724250 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:32.724261 | orchestrator |
2025-07-12 15:24:32.724271 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-07-12 15:24:32.724282 | orchestrator | Saturday 12 July 2025 15:24:19 +0000 (0:00:01.896) 0:07:18.014 *********
2025-07-12 15:24:32.724293 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:32.724304 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:32.724314 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:32.724333 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:32.724344 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:32.724355 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:32.724365 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:32.724376 | orchestrator |
2025-07-12 15:24:32.724386 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-07-12 15:24:32.724397 | orchestrator | Saturday 12 July 2025 15:24:20 +0000 (0:00:01.130) 0:07:19.144 *********
2025-07-12 15:24:32.724408 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:32.724418 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:32.724428 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:32.724439 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:32.724449 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:32.724460 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:32.724470 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:32.724480 | orchestrator |
2025-07-12 15:24:32.724491 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-07-12 15:24:32.724502 | orchestrator | Saturday 12 July 2025 15:24:21 +0000 (0:00:00.966) 0:07:20.110 *********
2025-07-12 15:24:32.724513 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 15:24:32.724525 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 15:24:32.724536 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 15:24:32.724547 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 15:24:32.724558 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 15:24:32.724568 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 15:24:32.724579 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 15:24:32.724589 | orchestrator |
2025-07-12 15:24:32.724600 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-07-12 15:24:32.724611 | orchestrator | Saturday 12 July 2025 15:24:23 +0000 (0:00:01.640) 0:07:21.751 *********
2025-07-12 15:24:32.724622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:24:32.724633 | orchestrator |
2025-07-12 15:24:32.724644 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-07-12 15:24:32.724655 | orchestrator | Saturday 12 July 2025 15:24:24 +0000 (0:00:00.774) 0:07:22.525 *********
2025-07-12 15:24:32.724665 | orchestrator | changed: [testbed-manager]
2025-07-12 15:24:32.724676 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:24:32.724686 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:24:32.724697 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:24:32.724707 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:24:32.724721 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:24:32.724739 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:24:32.724759 | orchestrator |
2025-07-12 15:24:32.724777 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-07-12 15:24:32.724801 | orchestrator | Saturday 12 July 2025 15:24:32 +0000 (0:00:08.628) 0:07:31.154 *********
2025-07-12 15:24:49.175498 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:49.175673 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:49.175698 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:49.175739 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:49.175751 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:49.175762 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:49.175772 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:49.175784 | orchestrator |
2025-07-12 15:24:49.175796 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-07-12 15:24:49.175809 | orchestrator | Saturday 12 July 2025 15:24:34 +0000 (0:00:02.048) 0:07:33.203 *********
2025-07-12 15:24:49.175820 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:49.175878 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:49.175891 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:49.175901 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:49.175912 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:49.175922 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:49.175933 | orchestrator |
2025-07-12 15:24:49.175944 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-07-12 15:24:49.175955 | orchestrator | Saturday 12 July 2025 15:24:36 +0000 (0:00:01.518) 0:07:34.721 *********
2025-07-12 15:24:49.175966 | orchestrator | changed: [testbed-manager]
2025-07-12 15:24:49.175978 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:24:49.175988 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:24:49.175999 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:24:49.176010 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:24:49.176023 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:24:49.176035 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:24:49.176047 | orchestrator |
2025-07-12 15:24:49.176060 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-07-12 15:24:49.176071 | orchestrator |
2025-07-12 15:24:49.176122 | orchestrator | TASK [Include hardening role] **************************************************
2025-07-12 15:24:49.176136 | orchestrator | Saturday 12 July 2025 15:24:37 +0000 (0:00:01.474) 0:07:36.196 *********
2025-07-12 15:24:49.176148 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:24:49.176160 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:24:49.176172 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:24:49.176184 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:24:49.176196 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:24:49.176207 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:24:49.176218 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:24:49.176230 | orchestrator |
2025-07-12 15:24:49.176243 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-07-12 15:24:49.176255 | orchestrator |
2025-07-12 15:24:49.176267 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-07-12 15:24:49.176279 | orchestrator | Saturday 12 July 2025 15:24:38 +0000 (0:00:00.525) 0:07:36.722 *********
2025-07-12 15:24:49.176291 | orchestrator | changed: [testbed-manager]
2025-07-12 15:24:49.176303 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:24:49.176314 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:24:49.176327 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:24:49.176339 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:24:49.176351 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:24:49.176363 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:24:49.176374 | orchestrator |
2025-07-12 15:24:49.176386 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-07-12 15:24:49.176398 | orchestrator | Saturday 12 July 2025 15:24:39 +0000 (0:00:01.300) 0:07:38.022 *********
2025-07-12 15:24:49.176409 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:49.176419 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:49.176430 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:49.176441 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:49.176451 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:49.176462 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:49.176472 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:49.176483 | orchestrator |
2025-07-12 15:24:49.176494 | orchestrator | TASK [Include auditd role] *****************************************************
2025-07-12 15:24:49.176513 | orchestrator | Saturday 12 July 2025 15:24:41 +0000 (0:00:01.437) 0:07:39.460 *********
2025-07-12 15:24:49.176524 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:24:49.176534 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:24:49.176545 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:24:49.176555 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:24:49.176566 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:24:49.176576 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:24:49.176587 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:24:49.176597 | orchestrator |
2025-07-12 15:24:49.176608 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-07-12 15:24:49.176618 | orchestrator | Saturday 12 July 2025 15:24:41 +0000 (0:00:00.981) 0:07:40.442 *********
2025-07-12 15:24:49.176629 | orchestrator | changed: [testbed-manager]
2025-07-12 15:24:49.176639 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:24:49.176650 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:24:49.176660 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:24:49.176670 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:24:49.176681 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:24:49.176691 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:24:49.176702 | orchestrator |
2025-07-12 15:24:49.176712 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-07-12 15:24:49.176723 | orchestrator |
2025-07-12 15:24:49.176733 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-07-12 15:24:49.176744 | orchestrator | Saturday 12 July 2025 15:24:43 +0000 (0:00:01.245) 0:07:41.687 *********
2025-07-12 15:24:49.176755 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:24:49.176767 | orchestrator |
2025-07-12 15:24:49.176778 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-07-12 15:24:49.176788 | orchestrator | Saturday 12 July 2025 15:24:44 +0000 (0:00:00.988) 0:07:42.676 *********
2025-07-12 15:24:49.176799 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:49.176815 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:49.176825 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:49.176836 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:49.176847 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:49.176857 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:49.176868 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:49.176878 | orchestrator |
2025-07-12 15:24:49.176907 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-07-12 15:24:49.176919 | orchestrator | Saturday 12 July 2025 15:24:45 +0000 (0:00:00.813) 0:07:43.490 *********
2025-07-12 15:24:49.176929 | orchestrator | changed: [testbed-manager]
2025-07-12 15:24:49.176940 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:24:49.176951 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:24:49.176961 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:24:49.176971 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:24:49.176982 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:24:49.176992 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:24:49.177003 | orchestrator |
2025-07-12 15:24:49.177013 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-07-12 15:24:49.177024 | orchestrator | Saturday 12 July 2025 15:24:46 +0000 (0:00:01.143) 0:07:44.633 *********
2025-07-12 15:24:49.177035 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:24:49.177046 | orchestrator |
2025-07-12 15:24:49.177056 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-07-12 15:24:49.177067 | orchestrator | Saturday 12 July 2025 15:24:47 +0000 (0:00:00.956) 0:07:45.590 *********
2025-07-12 15:24:49.177078 | orchestrator | ok: [testbed-manager]
2025-07-12 15:24:49.177136 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:24:49.177167 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:24:49.177185 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:24:49.177196 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:24:49.177206 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:24:49.177217 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:24:49.177227 | orchestrator |
2025-07-12 15:24:49.177238 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-07-12 15:24:49.177249 | orchestrator | Saturday 12 July 2025 15:24:48 +0000 (0:00:00.892) 0:07:46.482 *********
2025-07-12 15:24:49.177259 | orchestrator | changed: [testbed-manager]
2025-07-12 15:24:49.177270 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:24:49.177281 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:24:49.177291 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:24:49.177302 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:24:49.177312 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:24:49.177323 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:24:49.177333 | orchestrator |
2025-07-12 15:24:49.177344 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:24:49.177356 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-07-12 15:24:49.177367 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-07-12 15:24:49.177379 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-12 15:24:49.177390 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-12 15:24:49.177400 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-12 15:24:49.177411 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-12 15:24:49.177422 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-12 15:24:49.177433 | orchestrator |
2025-07-12 15:24:49.177443 | orchestrator |
2025-07-12 15:24:49.177454 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:24:49.177465 | orchestrator | Saturday 12 July 2025 15:24:49 +0000 (0:00:01.117) 0:07:47.600 *********
2025-07-12 15:24:49.177475 | orchestrator | ===============================================================================
2025-07-12 15:24:49.177486 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.06s
2025-07-12 15:24:49.177497 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.69s
2025-07-12 15:24:49.177507 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.34s
2025-07-12 15:24:49.177518 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.52s
2025-07-12 15:24:49.177528 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.14s
2025-07-12 15:24:49.177540 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.93s
2025-07-12 15:24:49.177550 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.66s
2025-07-12 15:24:49.177561 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.87s
2025-07-12 15:24:49.177571 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.63s
2025-07-12 15:24:49.177582 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.60s
2025-07-12 15:24:49.177593 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.12s
2025-07-12 15:24:49.177622 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.88s
2025-07-12 15:24:49.177639 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.84s
2025-07-12 15:24:49.177656 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.71s
2025-07-12 15:24:49.177685 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.57s
2025-07-12 15:24:49.599022 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.25s
2025-07-12 15:24:49.599149 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.99s
2025-07-12 15:24:49.599164 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.75s
2025-07-12 15:24:49.599175 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.73s
2025-07-12 15:24:49.599186 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.71s
2025-07-12 15:24:49.855862 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-12 15:24:49.855952 | orchestrator | + osism apply network
2025-07-12 15:25:02.134792 | orchestrator | 2025-07-12 15:25:02 | INFO  | Task ab141a9f-56d8-4001-a977-e43e63d85d5e (network) was prepared for execution.
2025-07-12 15:25:02.134930 | orchestrator | 2025-07-12 15:25:02 | INFO  | It takes a moment until task ab141a9f-56d8-4001-a977-e43e63d85d5e (network) has been started and output is visible here.
2025-07-12 15:25:29.622270 | orchestrator |
2025-07-12 15:25:29.622410 | orchestrator | PLAY [Apply role network] ******************************************************
2025-07-12 15:25:29.622465 | orchestrator |
2025-07-12 15:25:29.622479 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-07-12 15:25:29.622491 | orchestrator | Saturday 12 July 2025 15:25:06 +0000 (0:00:00.261) 0:00:00.261 *********
2025-07-12 15:25:29.622502 | orchestrator | ok: [testbed-manager]
2025-07-12 15:25:29.622514 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:25:29.622525 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:25:29.622536 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:25:29.622547 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:25:29.622558 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:25:29.622568 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:25:29.622579 | orchestrator |
2025-07-12 15:25:29.622590 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-07-12 15:25:29.622601 | orchestrator | Saturday 12 July 2025 15:25:06 +0000 (0:00:00.707) 0:00:00.969 *********
2025-07-12 15:25:29.622614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:25:29.622628 | orchestrator |
2025-07-12 15:25:29.622640 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-07-12 15:25:29.622651 | orchestrator | Saturday 12 July 2025 15:25:08 +0000 (0:00:01.170) 0:00:02.139 *********
2025-07-12 15:25:29.622662 | orchestrator | ok: [testbed-manager]
2025-07-12 15:25:29.622672 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:25:29.622683 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:25:29.622694 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:25:29.622704 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:25:29.622715 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:25:29.622725 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:25:29.622736 | orchestrator |
2025-07-12 15:25:29.622747 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-07-12 15:25:29.622758 | orchestrator | Saturday 12 July 2025 15:25:10 +0000 (0:00:02.113) 0:00:04.252 *********
2025-07-12 15:25:29.622769 | orchestrator | ok: [testbed-manager]
2025-07-12 15:25:29.622779 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:25:29.622790 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:25:29.622801 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:25:29.622811 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:25:29.622821 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:25:29.622863 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:25:29.622875 | orchestrator |
2025-07-12 15:25:29.622886 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-07-12 15:25:29.622896 | orchestrator | Saturday 12 July 2025 15:25:11 +0000 (0:00:01.582) 0:00:05.834 *********
2025-07-12 15:25:29.622907 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-07-12 15:25:29.622919 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-07-12 15:25:29.622929 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-07-12 15:25:29.622940 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-07-12 15:25:29.622950 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-07-12 15:25:29.622961 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-07-12 15:25:29.622972 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-07-12 15:25:29.622982 | orchestrator |
2025-07-12 15:25:29.622993 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-07-12 15:25:29.623003 | orchestrator | Saturday 12 July 2025 15:25:12 +0000 (0:00:00.968) 0:00:06.802 *********
2025-07-12 15:25:29.623014 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 15:25:29.623059 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 15:25:29.623071 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 15:25:29.623081 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-12 15:25:29.623092 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 15:25:29.623102 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-12 15:25:29.623113 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 15:25:29.623124 | orchestrator |
2025-07-12 15:25:29.623134 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-07-12 15:25:29.623145 | orchestrator | Saturday 12 July 2025 15:25:15 +0000 (0:00:02.834) 0:00:09.636 *********
2025-07-12 15:25:29.623156 | orchestrator | changed: [testbed-manager]
2025-07-12 15:25:29.623166 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:25:29.623177 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:25:29.623187 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:25:29.623212 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:25:29.623223 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:25:29.623234 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:25:29.623245 | orchestrator |
2025-07-12 15:25:29.623255 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-07-12 15:25:29.623266 | orchestrator | Saturday 12 July 2025 15:25:17 +0000 (0:00:01.554) 0:00:11.191 *********
2025-07-12 15:25:29.623277 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 15:25:29.623287 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-12 15:25:29.623298 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 15:25:29.623309 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 15:25:29.623319 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 15:25:29.623330 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 15:25:29.623340 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-12 15:25:29.623351 | orchestrator |
2025-07-12 15:25:29.623361 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-07-12 15:25:29.623372 | orchestrator | Saturday 12 July 2025 15:25:19 +0000 (0:00:01.893) 0:00:13.085 *********
2025-07-12 15:25:29.623383 | orchestrator | ok: [testbed-manager]
2025-07-12 15:25:29.623393 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:25:29.623404 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:25:29.623415 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:25:29.623425 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:25:29.623436 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:25:29.623446 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:25:29.623457 | orchestrator |
2025-07-12 15:25:29.623467 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-07-12 15:25:29.623499 | orchestrator | Saturday 12 July 2025 15:25:20 +0000 (0:00:01.064) 0:00:14.150 *********
2025-07-12 15:25:29.623519 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:25:29.623530 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:25:29.623540 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:25:29.623551 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:25:29.623562 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:25:29.623572 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:25:29.623582 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:25:29.623593 | orchestrator |
2025-07-12 15:25:29.623604 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-07-12 15:25:29.623614 | orchestrator | Saturday 12 July 2025 15:25:20 +0000 (0:00:00.618) 0:00:14.769 *********
2025-07-12 15:25:29.623625 | orchestrator | ok: [testbed-manager]
2025-07-12 15:25:29.623636 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:25:29.623646 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:25:29.623657 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:25:29.623667 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:25:29.623677 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:25:29.623688 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:25:29.623698 | orchestrator |
2025-07-12 15:25:29.623709 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-07-12 15:25:29.623720 | orchestrator | Saturday 12 July 2025 15:25:22 +0000 (0:00:02.072) 0:00:16.841 *********
2025-07-12 15:25:29.623730 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:25:29.623741 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:25:29.623752 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:25:29.623762 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:25:29.623772 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:25:29.623783 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:25:29.623794 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-07-12 15:25:29.623805 | orchestrator |
2025-07-12 15:25:29.623816 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-07-12 15:25:29.623827 | orchestrator | Saturday 12 July 2025 15:25:23 +0000 (0:00:00.845) 0:00:17.686 *********
2025-07-12 15:25:29.623838 | orchestrator | ok: [testbed-manager]
2025-07-12 15:25:29.623848 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:25:29.623859 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:25:29.623869 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:25:29.623880 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:25:29.623890 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:25:29.623900 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:25:29.623911 | orchestrator |
2025-07-12 15:25:29.623922 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-07-12 15:25:29.623932 | orchestrator | Saturday 12 July 2025 15:25:25 +0000 (0:00:01.661) 0:00:19.348 *********
2025-07-12 15:25:29.623943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:25:29.623956 | orchestrator |
2025-07-12 15:25:29.623966 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-07-12 15:25:29.623977 | orchestrator | Saturday 12 July 2025 15:25:26 +0000 (0:00:01.259) 0:00:20.607 *********
2025-07-12 15:25:29.623988 | orchestrator | ok: [testbed-manager]
2025-07-12 15:25:29.623998 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:25:29.624009 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:25:29.624039 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:25:29.624050 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:25:29.624061 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:25:29.624072 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:25:29.624082 | orchestrator |
2025-07-12 15:25:29.624093 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-07-12 15:25:29.624104 | orchestrator | Saturday 12 July 2025 15:25:27 +0000 (0:00:00.974) 0:00:21.582 *********
2025-07-12 15:25:29.624123 | orchestrator | ok: [testbed-manager]
2025-07-12 15:25:29.624133 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:25:29.624144 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:25:29.624154 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:25:29.624165 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:25:29.624175 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:25:29.624186 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:25:29.624196 | orchestrator |
2025-07-12 15:25:29.624207 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-07-12 15:25:29.624218 | orchestrator | Saturday 12 July 2025 15:25:28 +0000 (0:00:00.850) 0:00:22.432 *********
2025-07-12 15:25:29.624234 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-12 15:25:29.624245 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-07-12 15:25:29.624255 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-12 15:25:29.624266 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-07-12 15:25:29.624359 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-12 15:25:29.624371 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-07-12 15:25:29.624382 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-12 15:25:29.624392 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-07-12 15:25:29.624403 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-12 15:25:29.624414 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-07-12 15:25:29.624424 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-12 15:25:29.624435 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-07-12 15:25:29.624446 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-12 15:25:29.624457 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-07-12 15:25:29.624467 | orchestrator |
2025-07-12 15:25:29.624489 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-07-12 15:25:44.841453 | orchestrator | Saturday 12 July 2025 15:25:29 +0000 (0:00:01.176) 0:00:23.609 *********
2025-07-12 15:25:44.841554 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:25:44.841570 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:25:44.841583 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:25:44.841594 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:25:44.841605 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:25:44.841616 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:25:44.841627 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:25:44.841640 | orchestrator |
2025-07-12 15:25:44.841652 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-07-12 15:25:44.841664 | orchestrator | Saturday 12 July 2025 15:25:30 +0000
(0:00:00.615) 0:00:24.225 ********* 2025-07-12 15:25:44.841677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-5, testbed-node-4 2025-07-12 15:25:44.841691 | orchestrator | 2025-07-12 15:25:44.841702 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-07-12 15:25:44.841714 | orchestrator | Saturday 12 July 2025 15:25:34 +0000 (0:00:04.457) 0:00:28.683 ********* 2025-07-12 15:25:44.841727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.841740 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.841772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.841785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.841798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', 
'192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.841810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.841822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.841834 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.841846 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.841858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.841881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.841911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.841924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.841936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.841948 | orchestrator | 2025-07-12 15:25:44.841960 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-07-12 15:25:44.841971 | orchestrator | Saturday 12 July 2025 15:25:39 +0000 (0:00:05.112) 0:00:33.796 ********* 2025-07-12 15:25:44.841982 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.842072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.842088 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 
'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.842101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.842113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.842155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.842169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.842182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.842210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-12 15:25:44.842223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.842235 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.842248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:44.842269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:50.401179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-12 15:25:50.401319 | orchestrator | 2025-07-12 15:25:50.401337 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-07-12 15:25:50.401349 | orchestrator | Saturday 12 July 2025 15:25:44 +0000 (0:00:05.041) 0:00:38.837 ********* 2025-07-12 15:25:50.401363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:25:50.401374 | orchestrator | 2025-07-12 15:25:50.401385 | orchestrator | 
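The per-host items looped over in the two tasks above all follow one pattern: every host shares the same VNI and MTU per tunnel, and its `dests` list is the full set of tunnel endpoints minus its own `local_ip`. A minimal sketch reconstructing that item structure from the endpoint addresses visible in the log (the helper function and the derivation of `dests` are illustrative assumptions, not the role's actual code):

```python
# Hedged reconstruction of the vxlan loop items printed above: each host's
# 'dests' is every tunnel endpoint except its own local_ip. Endpoint
# addresses are taken from the log; the helper itself is an assumption.
ENDPOINTS = {
    "testbed-manager": "192.168.16.5",
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
    "testbed-node-3": "192.168.16.13",
    "testbed-node-4": "192.168.16.14",
    "testbed-node-5": "192.168.16.15",
}

def vxlan_item(host: str, vni: int = 42, mtu: int = 1350) -> dict:
    """Build the per-host vxlan dict in the shape shown in the log output."""
    local_ip = ENDPOINTS[host]
    # Plain string sort reproduces the ordering seen in the log
    # (e.g. '192.168.16.15' sorts before '192.168.16.5').
    dests = sorted(ip for h, ip in ENDPOINTS.items() if h != host)
    return {"local_ip": local_ip, "dests": dests, "mtu": mtu, "vni": vni}
```

For example, `vxlan_item("testbed-node-0")` yields the same `local_ip` of `192.168.16.10` and six-entry `dests` list that the "Create systemd networkd netdev files" task prints for that host.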
TASK [osism.commons.network : List existing configuration files] *************** 2025-07-12 15:25:50.401396 | orchestrator | Saturday 12 July 2025 15:25:45 +0000 (0:00:01.095) 0:00:39.933 ********* 2025-07-12 15:25:50.401407 | orchestrator | ok: [testbed-manager] 2025-07-12 15:25:50.401420 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:25:50.401430 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:25:50.401441 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:25:50.401451 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:25:50.401462 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:25:50.401472 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:25:50.401483 | orchestrator | 2025-07-12 15:25:50.401494 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-12 15:25:50.401505 | orchestrator | Saturday 12 July 2025 15:25:46 +0000 (0:00:01.019) 0:00:40.953 ********* 2025-07-12 15:25:50.401516 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 15:25:50.401527 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 15:25:50.401538 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 15:25:50.401549 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 15:25:50.401559 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 15:25:50.401570 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 15:25:50.401580 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 15:25:50.401591 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 15:25:50.401602 | orchestrator | skipping: [testbed-manager] 2025-07-12 
15:25:50.401613 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 15:25:50.401624 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 15:25:50.401634 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 15:25:50.401645 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 15:25:50.401655 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:25:50.401666 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 15:25:50.401677 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 15:25:50.401687 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 15:25:50.401698 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 15:25:50.401710 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:25:50.401737 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 15:25:50.401749 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 15:25:50.401761 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 15:25:50.401773 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 15:25:50.401786 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:25:50.401798 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 15:25:50.401818 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 15:25:50.401831 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 15:25:50.401843 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 15:25:50.401855 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:25:50.401867 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:25:50.401879 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 15:25:50.401892 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 15:25:50.401904 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 15:25:50.401915 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 15:25:50.401927 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:25:50.401939 | orchestrator | 2025-07-12 15:25:50.401952 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-07-12 15:25:50.401981 | orchestrator | Saturday 12 July 2025 15:25:48 +0000 (0:00:01.784) 0:00:42.738 ********* 2025-07-12 15:25:50.402074 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:25:50.402090 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:25:50.402103 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:25:50.402114 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:25:50.402124 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:25:50.402135 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:25:50.402145 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:25:50.402156 | orchestrator | 2025-07-12 15:25:50.402167 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-07-12 15:25:50.402177 | orchestrator | Saturday 12 July 2025 15:25:49 +0000 (0:00:00.615) 0:00:43.354 ********* 2025-07-12 15:25:50.402188 | orchestrator | skipping: 
[testbed-manager] 2025-07-12 15:25:50.402198 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:25:50.402209 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:25:50.402220 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:25:50.402230 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:25:50.402241 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:25:50.402251 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:25:50.402262 | orchestrator | 2025-07-12 15:25:50.402272 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:25:50.402284 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 15:25:50.402296 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 15:25:50.402307 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 15:25:50.402318 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 15:25:50.402329 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 15:25:50.402340 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 15:25:50.402351 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 15:25:50.402362 | orchestrator | 2025-07-12 15:25:50.402373 | orchestrator | 2025-07-12 15:25:50.402384 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:25:50.402406 | orchestrator | Saturday 12 July 2025 15:25:50 +0000 (0:00:00.698) 0:00:44.052 ********* 2025-07-12 15:25:50.402416 | orchestrator | =============================================================================== 
2025-07-12 15:25:50.402427 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.11s 2025-07-12 15:25:50.402438 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.04s 2025-07-12 15:25:50.402448 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.46s 2025-07-12 15:25:50.402459 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.83s 2025-07-12 15:25:50.402470 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.11s 2025-07-12 15:25:50.402481 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.07s 2025-07-12 15:25:50.402491 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.89s 2025-07-12 15:25:50.402502 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.78s 2025-07-12 15:25:50.402519 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.66s 2025-07-12 15:25:50.402530 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.58s 2025-07-12 15:25:50.402541 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.55s 2025-07-12 15:25:50.402551 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.26s 2025-07-12 15:25:50.402562 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.18s 2025-07-12 15:25:50.402573 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.17s 2025-07-12 15:25:50.402583 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s 2025-07-12 15:25:50.402594 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.06s 2025-07-12 
15:25:50.402605 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.02s 2025-07-12 15:25:50.402615 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2025-07-12 15:25:50.402626 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s 2025-07-12 15:25:50.402637 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.85s 2025-07-12 15:25:50.661931 | orchestrator | + osism apply wireguard 2025-07-12 15:26:02.600819 | orchestrator | 2025-07-12 15:26:02 | INFO  | Task 544ad15e-41f0-4015-8e6a-042ec46eb394 (wireguard) was prepared for execution. 2025-07-12 15:26:02.600932 | orchestrator | 2025-07-12 15:26:02 | INFO  | It takes a moment until task 544ad15e-41f0-4015-8e6a-042ec46eb394 (wireguard) has been started and output is visible here. 2025-07-12 15:26:19.878099 | orchestrator | 2025-07-12 15:26:19.878223 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-07-12 15:26:19.878241 | orchestrator | 2025-07-12 15:26:19.878254 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-07-12 15:26:19.878266 | orchestrator | Saturday 12 July 2025 15:26:06 +0000 (0:00:00.167) 0:00:00.167 ********* 2025-07-12 15:26:19.878278 | orchestrator | ok: [testbed-manager] 2025-07-12 15:26:19.878290 | orchestrator | 2025-07-12 15:26:19.878302 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-07-12 15:26:19.878313 | orchestrator | Saturday 12 July 2025 15:26:07 +0000 (0:00:01.159) 0:00:01.327 ********* 2025-07-12 15:26:19.878324 | orchestrator | changed: [testbed-manager] 2025-07-12 15:26:19.878337 | orchestrator | 2025-07-12 15:26:19.878348 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-07-12 15:26:19.878360 | orchestrator | 
Saturday 12 July 2025 15:26:12 +0000 (0:00:05.114) 0:00:06.441 ********* 2025-07-12 15:26:19.878371 | orchestrator | changed: [testbed-manager] 2025-07-12 15:26:19.878382 | orchestrator | 2025-07-12 15:26:19.878394 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-07-12 15:26:19.878430 | orchestrator | Saturday 12 July 2025 15:26:13 +0000 (0:00:00.526) 0:00:06.968 ********* 2025-07-12 15:26:19.878442 | orchestrator | changed: [testbed-manager] 2025-07-12 15:26:19.878453 | orchestrator | 2025-07-12 15:26:19.878464 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-07-12 15:26:19.878476 | orchestrator | Saturday 12 July 2025 15:26:13 +0000 (0:00:00.392) 0:00:07.361 ********* 2025-07-12 15:26:19.878487 | orchestrator | ok: [testbed-manager] 2025-07-12 15:26:19.878498 | orchestrator | 2025-07-12 15:26:19.878509 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-07-12 15:26:19.878520 | orchestrator | Saturday 12 July 2025 15:26:14 +0000 (0:00:00.531) 0:00:07.892 ********* 2025-07-12 15:26:19.878532 | orchestrator | ok: [testbed-manager] 2025-07-12 15:26:19.878545 | orchestrator | 2025-07-12 15:26:19.878557 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-07-12 15:26:19.878570 | orchestrator | Saturday 12 July 2025 15:26:14 +0000 (0:00:00.515) 0:00:08.408 ********* 2025-07-12 15:26:19.878582 | orchestrator | ok: [testbed-manager] 2025-07-12 15:26:19.878595 | orchestrator | 2025-07-12 15:26:19.878607 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-07-12 15:26:19.878619 | orchestrator | Saturday 12 July 2025 15:26:15 +0000 (0:00:00.408) 0:00:08.817 ********* 2025-07-12 15:26:19.878632 | orchestrator | changed: [testbed-manager] 2025-07-12 15:26:19.878645 | orchestrator | 2025-07-12 15:26:19.878658 | orchestrator 
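The wireguard tasks above generate a server keypair and a preshared key, render wg0.conf from them, and then enable wg-quick@wg0. A hedged sketch of the configuration shape such a role typically renders (section layout only; the keys, addresses, and port are placeholders, not values from this run):

```ini
# /etc/wireguard/wg0.conf -- shape only; all values below are placeholders
[Interface]
Address = <server VPN address>
ListenPort = <listen port>
PrivateKey = <generated server private key>

[Peer]
PublicKey = <client public key>
PresharedKey = <generated preshared key>
AllowedIPs = <client VPN address>/32
```

The matching client configuration file copied in the next task would mirror this, with the roles of the public keys swapped and the server's endpoint address added to its `[Peer]` section.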
| TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-07-12 15:26:19.878670 | orchestrator | Saturday 12 July 2025 15:26:16 +0000 (0:00:01.129) 0:00:09.946 ********* 2025-07-12 15:26:19.878682 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 15:26:19.878695 | orchestrator | changed: [testbed-manager] 2025-07-12 15:26:19.878707 | orchestrator | 2025-07-12 15:26:19.878719 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-07-12 15:26:19.878732 | orchestrator | Saturday 12 July 2025 15:26:17 +0000 (0:00:00.895) 0:00:10.842 ********* 2025-07-12 15:26:19.878744 | orchestrator | changed: [testbed-manager] 2025-07-12 15:26:19.878756 | orchestrator | 2025-07-12 15:26:19.878768 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-07-12 15:26:19.878781 | orchestrator | Saturday 12 July 2025 15:26:18 +0000 (0:00:01.642) 0:00:12.484 ********* 2025-07-12 15:26:19.878793 | orchestrator | changed: [testbed-manager] 2025-07-12 15:26:19.878806 | orchestrator | 2025-07-12 15:26:19.878819 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:26:19.878832 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:26:19.878846 | orchestrator | 2025-07-12 15:26:19.878859 | orchestrator | 2025-07-12 15:26:19.878873 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:26:19.878890 | orchestrator | Saturday 12 July 2025 15:26:19 +0000 (0:00:00.881) 0:00:13.366 ********* 2025-07-12 15:26:19.878924 | orchestrator | =============================================================================== 2025-07-12 15:26:19.878943 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.11s 2025-07-12 15:26:19.878988 | orchestrator | 
osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.64s 2025-07-12 15:26:19.879006 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.16s 2025-07-12 15:26:19.879023 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.13s 2025-07-12 15:26:19.879041 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s 2025-07-12 15:26:19.879058 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.88s 2025-07-12 15:26:19.879078 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-07-12 15:26:19.879096 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 2025-07-12 15:26:19.879126 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.52s 2025-07-12 15:26:19.879137 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-07-12 15:26:19.879148 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.39s 2025-07-12 15:26:20.130522 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-07-12 15:26:20.166412 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-07-12 15:26:20.166501 | orchestrator | Dload Upload Total Spent Left Speed 2025-07-12 15:26:20.259478 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 161 0 --:--:-- --:--:-- --:--:-- 163 2025-07-12 15:26:20.276083 | orchestrator | + osism apply --environment custom workarounds 2025-07-12 15:26:22.017758 | orchestrator | 2025-07-12 15:26:22 | INFO  | Trying to run play workarounds in environment custom 2025-07-12 15:26:32.098476 | orchestrator | 2025-07-12 15:26:32 | INFO  | Task 9b3434ab-5dbc-4a05-ad80-fea15776146b (workarounds) was 
prepared for execution. 2025-07-12 15:26:32.099572 | orchestrator | 2025-07-12 15:26:32 | INFO  | It takes a moment until task 9b3434ab-5dbc-4a05-ad80-fea15776146b (workarounds) has been started and output is visible here. 2025-07-12 15:26:56.445522 | orchestrator | 2025-07-12 15:26:56.445630 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:26:56.445649 | orchestrator | 2025-07-12 15:26:56.445662 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-07-12 15:26:56.445674 | orchestrator | Saturday 12 July 2025 15:26:35 +0000 (0:00:00.107) 0:00:00.107 ********* 2025-07-12 15:26:56.445685 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-07-12 15:26:56.445697 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-07-12 15:26:56.445708 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-07-12 15:26:56.445718 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-07-12 15:26:56.445729 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-07-12 15:26:56.445740 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-07-12 15:26:56.445751 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-07-12 15:26:56.445762 | orchestrator | 2025-07-12 15:26:56.445773 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-07-12 15:26:56.445784 | orchestrator | 2025-07-12 15:26:56.445795 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-12 15:26:56.445806 | orchestrator | Saturday 12 July 2025 15:26:36 +0000 (0:00:00.626) 0:00:00.734 ********* 2025-07-12 15:26:56.445817 | orchestrator | ok: [testbed-manager] 2025-07-12 15:26:56.445829 | orchestrator | 2025-07-12 
15:26:56.445840 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-07-12 15:26:56.445851 | orchestrator | 2025-07-12 15:26:56.445862 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-12 15:26:56.445872 | orchestrator | Saturday 12 July 2025 15:26:38 +0000 (0:00:02.042) 0:00:02.776 ********* 2025-07-12 15:26:56.445883 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:26:56.445894 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:26:56.445931 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:26:56.445942 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:26:56.445953 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:26:56.445964 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:26:56.445975 | orchestrator | 2025-07-12 15:26:56.445986 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-07-12 15:26:56.445997 | orchestrator | 2025-07-12 15:26:56.446008 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-07-12 15:26:56.446087 | orchestrator | Saturday 12 July 2025 15:26:40 +0000 (0:00:01.851) 0:00:04.628 ********* 2025-07-12 15:26:56.446124 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 15:26:56.446137 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 15:26:56.446150 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 15:26:56.446162 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 15:26:56.446173 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 
15:26:56.446194 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 15:26:56.446206 | orchestrator | 2025-07-12 15:26:56.446219 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-07-12 15:26:56.446231 | orchestrator | Saturday 12 July 2025 15:26:41 +0000 (0:00:01.412) 0:00:06.040 ********* 2025-07-12 15:26:56.446243 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:26:56.446255 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:26:56.446268 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:26:56.446279 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:26:56.446291 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:26:56.446303 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:26:56.446315 | orchestrator | 2025-07-12 15:26:56.446327 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-07-12 15:26:56.446339 | orchestrator | Saturday 12 July 2025 15:26:45 +0000 (0:00:03.719) 0:00:09.760 ********* 2025-07-12 15:26:56.446351 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:26:56.446363 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:26:56.446374 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:26:56.446386 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:26:56.446398 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:26:56.446410 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:26:56.446423 | orchestrator | 2025-07-12 15:26:56.446435 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-07-12 15:26:56.446446 | orchestrator | 2025-07-12 15:26:56.446457 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-07-12 15:26:56.446468 | orchestrator | Saturday 12 July 2025 15:26:46 +0000 (0:00:00.696) 0:00:10.457 
********* 2025-07-12 15:26:56.446479 | orchestrator | changed: [testbed-manager] 2025-07-12 15:26:56.446490 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:26:56.446501 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:26:56.446512 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:26:56.446522 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:26:56.446533 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:26:56.446544 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:26:56.446554 | orchestrator | 2025-07-12 15:26:56.446565 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-07-12 15:26:56.446576 | orchestrator | Saturday 12 July 2025 15:26:47 +0000 (0:00:01.617) 0:00:12.074 ********* 2025-07-12 15:26:56.446586 | orchestrator | changed: [testbed-manager] 2025-07-12 15:26:56.446597 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:26:56.446607 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:26:56.446618 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:26:56.446629 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:26:56.446640 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:26:56.446668 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:26:56.446680 | orchestrator | 2025-07-12 15:26:56.446691 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-07-12 15:26:56.446702 | orchestrator | Saturday 12 July 2025 15:26:49 +0000 (0:00:01.729) 0:00:13.804 ********* 2025-07-12 15:26:56.446712 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:26:56.446723 | orchestrator | ok: [testbed-manager] 2025-07-12 15:26:56.446741 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:26:56.446752 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:26:56.446762 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:26:56.446773 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:26:56.446784 | orchestrator | 
ok: [testbed-node-2] 2025-07-12 15:26:56.446794 | orchestrator | 2025-07-12 15:26:56.446805 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-07-12 15:26:56.446816 | orchestrator | Saturday 12 July 2025 15:26:51 +0000 (0:00:01.610) 0:00:15.415 ********* 2025-07-12 15:26:56.446827 | orchestrator | changed: [testbed-manager] 2025-07-12 15:26:56.446838 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:26:56.446848 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:26:56.446859 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:26:56.446870 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:26:56.446880 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:26:56.446891 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:26:56.446931 | orchestrator | 2025-07-12 15:26:56.446943 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-07-12 15:26:56.446954 | orchestrator | Saturday 12 July 2025 15:26:52 +0000 (0:00:01.875) 0:00:17.290 ********* 2025-07-12 15:26:56.446964 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:26:56.446975 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:26:56.446986 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:26:56.446996 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:26:56.447007 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:26:56.447017 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:26:56.447028 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:26:56.447039 | orchestrator | 2025-07-12 15:26:56.447049 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-07-12 15:26:56.447060 | orchestrator | 2025-07-12 15:26:56.447071 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-07-12 15:26:56.447082 | orchestrator | Saturday 12 July 2025 15:26:53 
+0000 (0:00:00.674) 0:00:17.965 ********* 2025-07-12 15:26:56.447093 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:26:56.447104 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:26:56.447114 | orchestrator | ok: [testbed-manager] 2025-07-12 15:26:56.447125 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:26:56.447136 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:26:56.447147 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:26:56.447157 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:26:56.447168 | orchestrator | 2025-07-12 15:26:56.447179 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:26:56.447191 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 15:26:56.447203 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:26:56.447214 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:26:56.447230 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:26:56.447241 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:26:56.447252 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:26:56.447263 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:26:56.447274 | orchestrator | 2025-07-12 15:26:56.447285 | orchestrator | 2025-07-12 15:26:56.447303 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:26:56.447314 | orchestrator | Saturday 12 July 2025 15:26:56 +0000 (0:00:02.833) 0:00:20.799 ********* 2025-07-12 15:26:56.447325 | orchestrator | 
=============================================================================== 2025-07-12 15:26:56.447336 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.72s 2025-07-12 15:26:56.447347 | orchestrator | Install python3-docker -------------------------------------------------- 2.83s 2025-07-12 15:26:56.447357 | orchestrator | Apply netplan configuration --------------------------------------------- 2.04s 2025-07-12 15:26:56.447368 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.88s 2025-07-12 15:26:56.447379 | orchestrator | Apply netplan configuration --------------------------------------------- 1.85s 2025-07-12 15:26:56.447390 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.73s 2025-07-12 15:26:56.447400 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.62s 2025-07-12 15:26:56.447417 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.61s 2025-07-12 15:26:56.447435 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.41s 2025-07-12 15:26:56.447453 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.70s 2025-07-12 15:26:56.447486 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.68s 2025-07-12 15:26:56.447513 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.63s 2025-07-12 15:26:57.021322 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-07-12 15:27:08.936493 | orchestrator | 2025-07-12 15:27:08 | INFO  | Task f52315f2-bb4d-439a-8aed-f107597fea70 (reboot) was prepared for execution. 
2025-07-12 15:27:08.936608 | orchestrator | 2025-07-12 15:27:08 | INFO  | It takes a moment until task f52315f2-bb4d-439a-8aed-f107597fea70 (reboot) has been started and output is visible here. 2025-07-12 15:27:18.082618 | orchestrator | 2025-07-12 15:27:18.082705 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 15:27:18.082719 | orchestrator | 2025-07-12 15:27:18.082731 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 15:27:18.082742 | orchestrator | Saturday 12 July 2025 15:27:12 +0000 (0:00:00.161) 0:00:00.161 ********* 2025-07-12 15:27:18.082753 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:27:18.082765 | orchestrator | 2025-07-12 15:27:18.082776 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 15:27:18.082787 | orchestrator | Saturday 12 July 2025 15:27:12 +0000 (0:00:00.082) 0:00:00.243 ********* 2025-07-12 15:27:18.082798 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:27:18.082809 | orchestrator | 2025-07-12 15:27:18.082820 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 15:27:18.082831 | orchestrator | Saturday 12 July 2025 15:27:13 +0000 (0:00:00.881) 0:00:01.124 ********* 2025-07-12 15:27:18.082841 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:27:18.082852 | orchestrator | 2025-07-12 15:27:18.082863 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 15:27:18.082912 | orchestrator | 2025-07-12 15:27:18.082924 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 15:27:18.082935 | orchestrator | Saturday 12 July 2025 15:27:13 +0000 (0:00:00.086) 0:00:01.211 ********* 2025-07-12 15:27:18.082946 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:27:18.082957 | 
orchestrator | 2025-07-12 15:27:18.082968 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 15:27:18.082978 | orchestrator | Saturday 12 July 2025 15:27:13 +0000 (0:00:00.079) 0:00:01.291 ********* 2025-07-12 15:27:18.082989 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:27:18.083000 | orchestrator | 2025-07-12 15:27:18.083011 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 15:27:18.083042 | orchestrator | Saturday 12 July 2025 15:27:14 +0000 (0:00:00.630) 0:00:01.922 ********* 2025-07-12 15:27:18.083054 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:27:18.083065 | orchestrator | 2025-07-12 15:27:18.083076 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 15:27:18.083086 | orchestrator | 2025-07-12 15:27:18.083097 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 15:27:18.083108 | orchestrator | Saturday 12 July 2025 15:27:14 +0000 (0:00:00.101) 0:00:02.023 ********* 2025-07-12 15:27:18.083119 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:27:18.083129 | orchestrator | 2025-07-12 15:27:18.083140 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 15:27:18.083151 | orchestrator | Saturday 12 July 2025 15:27:14 +0000 (0:00:00.151) 0:00:02.174 ********* 2025-07-12 15:27:18.083161 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:27:18.083172 | orchestrator | 2025-07-12 15:27:18.083185 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 15:27:18.083197 | orchestrator | Saturday 12 July 2025 15:27:15 +0000 (0:00:00.642) 0:00:02.817 ********* 2025-07-12 15:27:18.083210 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:27:18.083222 | orchestrator | 2025-07-12 15:27:18.083234 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 15:27:18.083247 | orchestrator | 2025-07-12 15:27:18.083259 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 15:27:18.083276 | orchestrator | Saturday 12 July 2025 15:27:15 +0000 (0:00:00.115) 0:00:02.932 ********* 2025-07-12 15:27:18.083289 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:27:18.083301 | orchestrator | 2025-07-12 15:27:18.083313 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 15:27:18.083325 | orchestrator | Saturday 12 July 2025 15:27:15 +0000 (0:00:00.088) 0:00:03.021 ********* 2025-07-12 15:27:18.083337 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:27:18.083349 | orchestrator | 2025-07-12 15:27:18.083362 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 15:27:18.083374 | orchestrator | Saturday 12 July 2025 15:27:16 +0000 (0:00:00.674) 0:00:03.696 ********* 2025-07-12 15:27:18.083387 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:27:18.083399 | orchestrator | 2025-07-12 15:27:18.083411 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 15:27:18.083423 | orchestrator | 2025-07-12 15:27:18.083435 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 15:27:18.083448 | orchestrator | Saturday 12 July 2025 15:27:16 +0000 (0:00:00.099) 0:00:03.795 ********* 2025-07-12 15:27:18.083460 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:27:18.083472 | orchestrator | 2025-07-12 15:27:18.083484 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 15:27:18.083497 | orchestrator | Saturday 12 July 2025 15:27:16 +0000 (0:00:00.085) 0:00:03.881 ********* 2025-07-12 
15:27:18.083509 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:27:18.083521 | orchestrator | 2025-07-12 15:27:18.083533 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 15:27:18.083544 | orchestrator | Saturday 12 July 2025 15:27:17 +0000 (0:00:00.663) 0:00:04.545 ********* 2025-07-12 15:27:18.083555 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:27:18.083565 | orchestrator | 2025-07-12 15:27:18.083576 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 15:27:18.083587 | orchestrator | 2025-07-12 15:27:18.083598 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 15:27:18.083608 | orchestrator | Saturday 12 July 2025 15:27:17 +0000 (0:00:00.096) 0:00:04.641 ********* 2025-07-12 15:27:18.083619 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:27:18.083630 | orchestrator | 2025-07-12 15:27:18.083640 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 15:27:18.083651 | orchestrator | Saturday 12 July 2025 15:27:17 +0000 (0:00:00.084) 0:00:04.726 ********* 2025-07-12 15:27:18.083671 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:27:18.083682 | orchestrator | 2025-07-12 15:27:18.083692 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 15:27:18.083704 | orchestrator | Saturday 12 July 2025 15:27:17 +0000 (0:00:00.623) 0:00:05.349 ********* 2025-07-12 15:27:18.083730 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:27:18.083741 | orchestrator | 2025-07-12 15:27:18.083752 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:27:18.083780 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:27:18.083791 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:27:18.083802 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:27:18.083813 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:27:18.083824 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:27:18.083835 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:27:18.083846 | orchestrator | 2025-07-12 15:27:18.083856 | orchestrator | 2025-07-12 15:27:18.083867 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:27:18.083897 | orchestrator | Saturday 12 July 2025 15:27:17 +0000 (0:00:00.034) 0:00:05.384 ********* 2025-07-12 15:27:18.083908 | orchestrator | =============================================================================== 2025-07-12 15:27:18.083918 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.12s 2025-07-12 15:27:18.083929 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.57s 2025-07-12 15:27:18.083940 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.53s 2025-07-12 15:27:18.252035 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-07-12 15:27:30.221222 | orchestrator | 2025-07-12 15:27:30 | INFO  | Task 8cef95b2-7239-4ad5-a7e5-88bb7efeb71f (wait-for-connection) was prepared for execution. 2025-07-12 15:27:30.221350 | orchestrator | 2025-07-12 15:27:30 | INFO  | It takes a moment until task 8cef95b2-7239-4ad5-a7e5-88bb7efeb71f (wait-for-connection) has been started and output is visible here. 
2025-07-12 15:27:46.001526 | orchestrator | 2025-07-12 15:27:46.001646 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-07-12 15:27:46.001663 | orchestrator | 2025-07-12 15:27:46.001676 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-07-12 15:27:46.001687 | orchestrator | Saturday 12 July 2025 15:27:34 +0000 (0:00:00.235) 0:00:00.235 ********* 2025-07-12 15:27:46.001699 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:27:46.001711 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:27:46.001722 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:27:46.001734 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:27:46.001745 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:27:46.001756 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:27:46.001767 | orchestrator | 2025-07-12 15:27:46.001778 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:27:46.001790 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:27:46.001803 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:27:46.001896 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:27:46.001911 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:27:46.001922 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:27:46.001933 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:27:46.001944 | orchestrator | 2025-07-12 15:27:46.001955 | orchestrator | 2025-07-12 15:27:46.001965 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-12 15:27:46.001977 | orchestrator | Saturday 12 July 2025 15:27:45 +0000 (0:00:11.545) 0:00:11.782 ********* 2025-07-12 15:27:46.001988 | orchestrator | =============================================================================== 2025-07-12 15:27:46.001998 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.55s 2025-07-12 15:27:46.250305 | orchestrator | + osism apply hddtemp 2025-07-12 15:27:58.056514 | orchestrator | 2025-07-12 15:27:58 | INFO  | Task e38c38f9-2b4f-4146-98ca-83e4ce163e95 (hddtemp) was prepared for execution. 2025-07-12 15:27:58.056615 | orchestrator | 2025-07-12 15:27:58 | INFO  | It takes a moment until task e38c38f9-2b4f-4146-98ca-83e4ce163e95 (hddtemp) has been started and output is visible here. 2025-07-12 15:28:23.884178 | orchestrator | 2025-07-12 15:28:23.884297 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-07-12 15:28:23.884314 | orchestrator | 2025-07-12 15:28:23.884327 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-07-12 15:28:23.884338 | orchestrator | Saturday 12 July 2025 15:28:01 +0000 (0:00:00.193) 0:00:00.194 ********* 2025-07-12 15:28:23.884350 | orchestrator | ok: [testbed-manager] 2025-07-12 15:28:23.884362 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:28:23.884373 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:28:23.884384 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:28:23.884395 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:28:23.884405 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:28:23.884417 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:28:23.884427 | orchestrator | 2025-07-12 15:28:23.884439 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-07-12 15:28:23.884449 | orchestrator | Saturday 12 July 2025 
15:28:02 +0000 (0:00:00.503) 0:00:00.697 ********* 2025-07-12 15:28:23.884463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:28:23.884477 | orchestrator | 2025-07-12 15:28:23.884488 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-07-12 15:28:23.884499 | orchestrator | Saturday 12 July 2025 15:28:03 +0000 (0:00:00.998) 0:00:01.695 ********* 2025-07-12 15:28:23.884510 | orchestrator | ok: [testbed-manager] 2025-07-12 15:28:23.884521 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:28:23.884532 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:28:23.884543 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:28:23.884553 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:28:23.884564 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:28:23.884575 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:28:23.884586 | orchestrator | 2025-07-12 15:28:23.884597 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-07-12 15:28:23.884608 | orchestrator | Saturday 12 July 2025 15:28:05 +0000 (0:00:01.915) 0:00:03.611 ********* 2025-07-12 15:28:23.884619 | orchestrator | changed: [testbed-manager] 2025-07-12 15:28:23.884630 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:28:23.884666 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:28:23.884678 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:28:23.884689 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:28:23.884702 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:28:23.884714 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:28:23.884726 | orchestrator | 2025-07-12 15:28:23.884738 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-07-12 15:28:23.884750 | orchestrator | Saturday 12 July 2025 15:28:06 +0000 (0:00:01.029) 0:00:04.641 ********* 2025-07-12 15:28:23.884762 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:28:23.884774 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:28:23.884786 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:28:23.884836 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:28:23.884853 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:28:23.884881 | orchestrator | ok: [testbed-manager] 2025-07-12 15:28:23.884894 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:28:23.884906 | orchestrator | 2025-07-12 15:28:23.884918 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-07-12 15:28:23.884930 | orchestrator | Saturday 12 July 2025 15:28:07 +0000 (0:00:01.008) 0:00:05.650 ********* 2025-07-12 15:28:23.884942 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:28:23.884955 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:28:23.884967 | orchestrator | changed: [testbed-manager] 2025-07-12 15:28:23.884979 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:28:23.884991 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:28:23.885003 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:28:23.885015 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:28:23.885027 | orchestrator | 2025-07-12 15:28:23.885040 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-07-12 15:28:23.885052 | orchestrator | Saturday 12 July 2025 15:28:08 +0000 (0:00:00.825) 0:00:06.475 ********* 2025-07-12 15:28:23.885062 | orchestrator | changed: [testbed-manager] 2025-07-12 15:28:23.885073 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:28:23.885084 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:28:23.885094 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:28:23.885105 | orchestrator | changed: 
[testbed-node-3]
2025-07-12 15:28:23.885116 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:28:23.885126 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:28:23.885137 | orchestrator |
2025-07-12 15:28:23.885148 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-07-12 15:28:23.885158 | orchestrator | Saturday 12 July 2025 15:28:20 +0000 (0:00:12.261) 0:00:18.737 *********
2025-07-12 15:28:23.885170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:28:23.885181 | orchestrator |
2025-07-12 15:28:23.885192 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-07-12 15:28:23.885203 | orchestrator | Saturday 12 July 2025 15:28:21 +0000 (0:00:01.292) 0:00:20.029 *********
2025-07-12 15:28:23.885213 | orchestrator | changed: [testbed-manager]
2025-07-12 15:28:23.885224 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:28:23.885235 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:28:23.885245 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:28:23.885256 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:28:23.885266 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:28:23.885277 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:28:23.885288 | orchestrator |
2025-07-12 15:28:23.885298 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:28:23.885310 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:28:23.885341 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:28:23.885364 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:28:23.885375 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:28:23.885386 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:28:23.885397 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:28:23.885408 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:28:23.885419 | orchestrator |
2025-07-12 15:28:23.885430 | orchestrator |
2025-07-12 15:28:23.885440 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:28:23.885451 | orchestrator | Saturday 12 July 2025 15:28:23 +0000 (0:00:01.838) 0:00:21.868 *********
2025-07-12 15:28:23.885462 | orchestrator | ===============================================================================
2025-07-12 15:28:23.885473 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.26s
2025-07-12 15:28:23.885484 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.92s
2025-07-12 15:28:23.885494 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s
2025-07-12 15:28:23.885505 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.29s
2025-07-12 15:28:23.885516 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.03s
2025-07-12 15:28:23.885526 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.01s
2025-07-12 15:28:23.885537 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.00s
2025-07-12 15:28:23.885548 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.83s
2025-07-12 15:28:23.885558 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.50s
2025-07-12 15:28:24.144669 | orchestrator | ++ semver 9.2.0 7.1.1
2025-07-12 15:28:24.189997 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-12 15:28:24.190158 | orchestrator | + sudo systemctl restart manager.service
2025-07-12 15:28:37.567908 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-12 15:28:37.568013 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-12 15:28:37.568024 | orchestrator | + local max_attempts=60
2025-07-12 15:28:37.568044 | orchestrator | + local name=ceph-ansible
2025-07-12 15:28:37.568049 | orchestrator | + local attempt_num=1
2025-07-12 15:28:37.568055 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:28:37.596938 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:28:37.596998 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:28:37.597003 | orchestrator | + sleep 5
2025-07-12 15:28:42.604744 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:28:42.627836 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:28:42.627858 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:28:42.627863 | orchestrator | + sleep 5
2025-07-12 15:28:47.630968 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:28:47.664340 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:28:47.664435 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:28:47.664456 | orchestrator | + sleep 5
2025-07-12 15:28:52.669445 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:28:52.705883 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:28:52.705983 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:28:52.705999 | orchestrator | + sleep 5
2025-07-12 15:28:57.709239 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:28:57.756873 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:28:57.756943 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:28:57.756951 | orchestrator | + sleep 5
2025-07-12 15:29:02.761588 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:29:02.799364 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:02.799442 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:29:02.799456 | orchestrator | + sleep 5
2025-07-12 15:29:07.804591 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:29:07.840451 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:07.840548 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:29:07.840563 | orchestrator | + sleep 5
2025-07-12 15:29:12.846235 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:29:12.866845 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:12.866901 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:29:12.866914 | orchestrator | + sleep 5
2025-07-12 15:29:17.868894 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:29:17.897906 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:17.897972 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:29:17.897987 | orchestrator | + sleep 5
2025-07-12 15:29:22.904249 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:29:22.941683 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:22.941823 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:29:22.941850 | orchestrator | + sleep 5
2025-07-12 15:29:27.946273 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:29:27.978103 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:27.978198 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:29:27.978212 | orchestrator | + sleep 5
2025-07-12 15:29:32.981439 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:29:33.021086 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:33.021174 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:29:33.021190 | orchestrator | + sleep 5
2025-07-12 15:29:38.026097 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:29:38.068170 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:38.068272 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 15:29:38.068288 | orchestrator | + sleep 5
2025-07-12 15:29:43.073299 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 15:29:43.105245 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:43.105334 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-12 15:29:43.105349 | orchestrator | + local max_attempts=60
2025-07-12 15:29:43.105361 | orchestrator | + local name=kolla-ansible
2025-07-12 15:29:43.105372 | orchestrator | + local attempt_num=1
2025-07-12 15:29:43.105384 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-12 15:29:43.138615 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:43.138813 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-12 15:29:43.138832 | orchestrator | + local max_attempts=60
2025-07-12 15:29:43.138843 | orchestrator | + local name=osism-ansible
2025-07-12 15:29:43.138855 | orchestrator | + local attempt_num=1
2025-07-12 15:29:43.138876 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-12 15:29:43.177108 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 15:29:43.177141 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-12 15:29:43.177154 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-12 15:29:43.342291 | orchestrator | ARA in ceph-ansible already disabled.
2025-07-12 15:29:43.495478 | orchestrator | ARA in kolla-ansible already disabled.
2025-07-12 15:29:43.647829 | orchestrator | ARA in osism-ansible already disabled.
2025-07-12 15:29:43.794126 | orchestrator | ARA in osism-kubernetes already disabled.
2025-07-12 15:29:43.794220 | orchestrator | + osism apply gather-facts
2025-07-12 15:29:55.526598 | orchestrator | 2025-07-12 15:29:55 | INFO  | Task 9a76a54a-cc72-41b6-a0bd-26446976e540 (gather-facts) was prepared for execution.
2025-07-12 15:29:55.526745 | orchestrator | 2025-07-12 15:29:55 | INFO  | It takes a moment until task 9a76a54a-cc72-41b6-a0bd-26446976e540 (gather-facts) has been started and output is visible here.
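The `wait_for_container_healthy` calls traced above can be reconstructed from the `set -x` output as roughly the following helper. This is a sketch inferred from the trace, not the verbatim testbed source; it calls `docker` from `PATH`, whereas the trace uses `/usr/bin/docker` directly:

```shell
# Sketch of the polling helper seen in the xtrace output. The parameter names
# and the 5-second interval come from the trace; the exact source may differ.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status
    # Poll the container's health status until it reports "healthy".
    until status=$(docker inspect -f '{{.State.Health.Status}}' "$name") \
          && [[ $status == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

The statuses observed in the log progress from `unhealthy` through `starting` to `healthy`, which is why the loop needs roughly a minute here after `systemctl restart manager.service` before it falls through for `ceph-ansible`.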
2025-07-12 15:30:07.793093 | orchestrator |
2025-07-12 15:30:07.793246 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 15:30:07.793276 | orchestrator |
2025-07-12 15:30:07.793298 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 15:30:07.793316 | orchestrator | Saturday 12 July 2025 15:29:58 +0000 (0:00:00.163) 0:00:00.163 *********
2025-07-12 15:30:07.793335 | orchestrator | ok: [testbed-manager]
2025-07-12 15:30:07.793354 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:30:07.793372 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:30:07.793388 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:30:07.793405 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:30:07.793422 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:30:07.793439 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:30:07.793458 | orchestrator |
2025-07-12 15:30:07.793476 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 15:30:07.793496 | orchestrator |
2025-07-12 15:30:07.793515 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 15:30:07.793533 | orchestrator | Saturday 12 July 2025 15:30:06 +0000 (0:00:07.990) 0:00:08.153 *********
2025-07-12 15:30:07.793552 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:30:07.793572 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:30:07.793592 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:30:07.793611 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:30:07.793656 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:30:07.793678 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:30:07.793728 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:30:07.793750 | orchestrator |
2025-07-12 15:30:07.793770 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:30:07.793788 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:30:07.793808 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:30:07.793826 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:30:07.793846 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:30:07.793867 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:30:07.793890 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:30:07.793911 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:30:07.793931 | orchestrator |
2025-07-12 15:30:07.793955 | orchestrator |
2025-07-12 15:30:07.793976 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:30:07.793996 | orchestrator | Saturday 12 July 2025 15:30:07 +0000 (0:00:00.475) 0:00:08.629 *********
2025-07-12 15:30:07.794016 | orchestrator | ===============================================================================
2025-07-12 15:30:07.794128 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.99s
2025-07-12 15:30:07.794149 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s
2025-07-12 15:30:08.040577 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-07-12 15:30:08.061198 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-07-12 15:30:08.076045 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-07-12 15:30:08.090084 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-07-12 15:30:08.108991 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-07-12 15:30:08.122894 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-07-12 15:30:08.135802 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-07-12 15:30:08.155296 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-07-12 15:30:08.169070 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-07-12 15:30:08.188046 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-07-12 15:30:08.204299 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-07-12 15:30:08.224066 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-07-12 15:30:08.236692 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-07-12 15:30:08.253952 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-07-12 15:30:08.269984 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-07-12 15:30:08.280805 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-07-12 15:30:08.297603 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia
2025-07-12 15:30:08.317004 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-07-12 15:30:08.333146 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-07-12 15:30:08.350858 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-07-12 15:30:08.370269 | orchestrator | + [[ false == \t\r\u\e ]]
2025-07-12 15:30:08.775132 | orchestrator | ok: Runtime: 0:22:19.339214
2025-07-12 15:30:08.871097 |
2025-07-12 15:30:08.871235 | TASK [Deploy services]
2025-07-12 15:30:09.401985 | orchestrator | skipping: Conditional result was False
2025-07-12 15:30:09.416687 |
2025-07-12 15:30:09.416919 | TASK [Deploy in a nutshell]
2025-07-12 15:30:10.101809 | orchestrator | + set -e
2025-07-12 15:30:10.102003 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 15:30:10.102076 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 15:30:10.102108 | orchestrator | ++ INTERACTIVE=false
2025-07-12 15:30:10.102131 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 15:30:10.102144 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 15:30:10.102156 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 15:30:10.102200 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 15:30:10.102227 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 15:30:10.102240 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 15:30:10.102254 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 15:30:10.102264 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 15:30:10.102281 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 15:30:10.102291 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 15:30:10.102309 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 15:30:10.102319 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 15:30:10.102330 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 15:30:10.102340 | orchestrator | ++ export ARA=false
2025-07-12 15:30:10.102350 | orchestrator | ++ ARA=false
2025-07-12 15:30:10.102359 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 15:30:10.102370 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 15:30:10.102393 | orchestrator | ++ export TEMPEST=false
2025-07-12 15:30:10.102403 | orchestrator | ++ TEMPEST=false
2025-07-12 15:30:10.102413 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 15:30:10.102423 | orchestrator | ++ IS_ZUUL=true
2025-07-12 15:30:10.102433 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204
2025-07-12 15:30:10.102443 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204
2025-07-12 15:30:10.102453 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 15:30:10.102463 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 15:30:10.102472 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 15:30:10.102482 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 15:30:10.102491 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 15:30:10.102501 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 15:30:10.102510 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 15:30:10.102520 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 15:30:10.102530 | orchestrator |
2025-07-12 15:30:10.102539 | orchestrator | # PULL IMAGES
2025-07-12 15:30:10.102549 | orchestrator |
2025-07-12 15:30:10.102559 | orchestrator | + echo
2025-07-12 15:30:10.102569 | orchestrator | + echo '# PULL IMAGES'
2025-07-12 15:30:10.102578 | orchestrator | + echo
2025-07-12 15:30:10.103016 | orchestrator | ++ semver 9.2.0 7.0.0
2025-07-12 15:30:10.150061 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-12 15:30:10.150167 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-07-12 15:30:11.700414 | orchestrator | 2025-07-12 15:30:11 | INFO  | Trying to run play pull-images in environment custom
2025-07-12 15:30:21.875607 | orchestrator | 2025-07-12 15:30:21 | INFO  | Task db382a5e-4ca9-47e6-a9c6-2554202820f5 (pull-images) was prepared for execution.
2025-07-12 15:30:21.875776 | orchestrator | 2025-07-12 15:30:21 | INFO  | It takes a moment until task db382a5e-4ca9-47e6-a9c6-2554202820f5 (pull-images) has been started and output is visible here.
2025-07-12 15:32:23.325709 | orchestrator |
2025-07-12 15:32:23.325827 | orchestrator | PLAY [Pull images] *************************************************************
2025-07-12 15:32:23.325846 | orchestrator |
2025-07-12 15:32:23.325858 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-07-12 15:32:23.325884 | orchestrator | Saturday 12 July 2025 15:30:25 +0000 (0:00:00.155) 0:00:00.155 *********
2025-07-12 15:32:23.325896 | orchestrator | changed: [testbed-manager]
2025-07-12 15:32:23.325908 | orchestrator |
2025-07-12 15:32:23.325919 | orchestrator | TASK [Pull other images] *******************************************************
2025-07-12 15:32:23.325930 | orchestrator | Saturday 12 July 2025 15:31:27 +0000 (0:01:01.361) 0:01:01.517 *********
2025-07-12 15:32:23.325942 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-07-12 15:32:23.325957 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-07-12 15:32:23.325968 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-07-12 15:32:23.325979 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-07-12 15:32:23.325990 | orchestrator | changed: [testbed-manager] => (item=common)
2025-07-12 15:32:23.326001 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-07-12 15:32:23.326100 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-07-12 15:32:23.326113 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-07-12 15:32:23.326129 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-07-12 15:32:23.326141 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-07-12 15:32:23.326151 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-07-12 15:32:23.326163 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-07-12 15:32:23.326173 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-07-12 15:32:23.326184 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-07-12 15:32:23.326194 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-07-12 15:32:23.326205 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-07-12 15:32:23.326215 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-07-12 15:32:23.326226 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-07-12 15:32:23.326237 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-07-12 15:32:23.326247 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-07-12 15:32:23.326258 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-07-12 15:32:23.326269 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-07-12 15:32:23.326279 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-07-12 15:32:23.326290 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-07-12 15:32:23.326300 | orchestrator |
2025-07-12 15:32:23.326312 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:32:23.326323 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:32:23.326335 | orchestrator |
2025-07-12 15:32:23.326346 | orchestrator |
2025-07-12 15:32:23.326356 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:32:23.326367 | orchestrator | Saturday 12 July 2025 15:32:23 +0000 (0:00:56.025) 0:01:57.542 *********
2025-07-12 15:32:23.326378 | orchestrator | ===============================================================================
2025-07-12 15:32:23.326389 | orchestrator | Pull keystone image ---------------------------------------------------- 61.36s
2025-07-12 15:32:23.326400 | orchestrator | Pull other images ------------------------------------------------------ 56.03s
2025-07-12 15:32:25.442360 | orchestrator | 2025-07-12 15:32:25 | INFO  | Trying to run play wipe-partitions in environment custom
2025-07-12 15:32:35.530652 | orchestrator | 2025-07-12 15:32:35 | INFO  | Task 058da7f8-c1a7-4f41-abcd-3d35dd1f7b19 (wipe-partitions) was prepared for execution.
2025-07-12 15:32:35.530753 | orchestrator | 2025-07-12 15:32:35 | INFO  | It takes a moment until task 058da7f8-c1a7-4f41-abcd-3d35dd1f7b19 (wipe-partitions) has been started and output is visible here.
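The wipe-partitions play that runs next boils down to a few shell steps per device. A minimal sketch, assuming the helper names are ours; the `/dev/sdb`..`/dev/sdd` device list, the 32M figure, and the udev steps come from the task output that follows. Do not run this against a disk you care about:

```shell
# Hedged sketch of what the wipe-partitions tasks do per Ceph OSD device.
wipe_device() {
    local dev=$1
    wipefs --all "$dev"                        # drop filesystem/LVM/RAID signatures
    dd if=/dev/zero of="$dev" bs=1M count=32   # overwrite the first 32M with zeros
}

refresh_udev() {
    udevadm control --reload-rules             # reload udev rules
    udevadm trigger                            # request device events from the kernel
}
```

Zeroing the first 32M clears partition tables and leftover Ceph metadata near the start of the disk, so a redeploy sees the devices as blank.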
2025-07-12 15:32:46.888051 | orchestrator |
2025-07-12 15:32:46.888165 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-07-12 15:32:46.888189 | orchestrator |
2025-07-12 15:32:46.888210 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-07-12 15:32:46.888229 | orchestrator | Saturday 12 July 2025 15:32:39 +0000 (0:00:00.122) 0:00:00.122 *********
2025-07-12 15:32:46.888247 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:32:46.888268 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:32:46.888286 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:32:46.888304 | orchestrator |
2025-07-12 15:32:46.888323 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-07-12 15:32:46.888342 | orchestrator | Saturday 12 July 2025 15:32:39 +0000 (0:00:00.564) 0:00:00.687 *********
2025-07-12 15:32:46.888374 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:32:46.888424 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:32:46.888436 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:32:46.888447 | orchestrator |
2025-07-12 15:32:46.888457 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-07-12 15:32:46.888487 | orchestrator | Saturday 12 July 2025 15:32:39 +0000 (0:00:00.251) 0:00:00.938 *********
2025-07-12 15:32:46.888499 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:32:46.888510 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:32:46.888521 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:32:46.888531 | orchestrator |
2025-07-12 15:32:46.888542 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-07-12 15:32:46.888552 | orchestrator | Saturday 12 July 2025 15:32:40 +0000 (0:00:00.677) 0:00:01.616 *********
2025-07-12 15:32:46.888563 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:32:46.888640 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:32:46.888654 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:32:46.888667 | orchestrator |
2025-07-12 15:32:46.888679 | orchestrator | TASK [Check device availability] ***********************************************
2025-07-12 15:32:46.888692 | orchestrator | Saturday 12 July 2025 15:32:40 +0000 (0:00:00.240) 0:00:01.857 *********
2025-07-12 15:32:46.888704 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-07-12 15:32:46.888717 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-07-12 15:32:46.888729 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-07-12 15:32:46.888742 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-07-12 15:32:46.888754 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-07-12 15:32:46.888770 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-07-12 15:32:46.888783 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-07-12 15:32:46.888794 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-07-12 15:32:46.888807 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-07-12 15:32:46.888819 | orchestrator |
2025-07-12 15:32:46.888831 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-07-12 15:32:46.888843 | orchestrator | Saturday 12 July 2025 15:32:41 +0000 (0:00:01.214) 0:00:03.071 *********
2025-07-12 15:32:46.888856 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-07-12 15:32:46.888869 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-07-12 15:32:46.888881 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-07-12 15:32:46.888892 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-07-12 15:32:46.888904 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-07-12 15:32:46.888916 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-07-12 15:32:46.888928 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-07-12 15:32:46.888939 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-07-12 15:32:46.888950 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-07-12 15:32:46.888960 | orchestrator |
2025-07-12 15:32:46.888971 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-07-12 15:32:46.888982 | orchestrator | Saturday 12 July 2025 15:32:43 +0000 (0:00:01.368) 0:00:04.439 *********
2025-07-12 15:32:46.888993 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-07-12 15:32:46.889003 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-07-12 15:32:46.889014 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-07-12 15:32:46.889025 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-07-12 15:32:46.889035 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-07-12 15:32:46.889046 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-07-12 15:32:46.889057 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-07-12 15:32:46.889067 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-07-12 15:32:46.889078 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-07-12 15:32:46.889088 | orchestrator |
2025-07-12 15:32:46.889099 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-07-12 15:32:46.889110 | orchestrator | Saturday 12 July 2025 15:32:45 +0000 (0:00:02.133) 0:00:06.572 *********
2025-07-12 15:32:46.889121 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:32:46.889139 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:32:46.889149 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:32:46.889160 | orchestrator |
2025-07-12 15:32:46.889171 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-07-12 15:32:46.889182 | orchestrator | Saturday 12 July 2025 15:32:46 +0000 (0:00:00.579) 0:00:07.152 *********
2025-07-12 15:32:46.889192 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:32:46.889203 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:32:46.889214 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:32:46.889224 | orchestrator |
2025-07-12 15:32:46.889235 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:32:46.889252 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:32:46.889264 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:32:46.889294 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:32:46.889306 | orchestrator |
2025-07-12 15:32:46.889317 | orchestrator |
2025-07-12 15:32:46.889327 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:32:46.889338 | orchestrator | Saturday 12 July 2025 15:32:46 +0000 (0:00:00.607) 0:00:07.760 *********
2025-07-12 15:32:46.889349 | orchestrator | ===============================================================================
2025-07-12 15:32:46.889360 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s
2025-07-12 15:32:46.889370 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.37s
2025-07-12 15:32:46.889381 | orchestrator | Check device availability ----------------------------------------------- 1.21s
2025-07-12 15:32:46.889391 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.68s
2025-07-12 15:32:46.889402 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s
2025-07-12 15:32:46.889413 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s
2025-07-12 15:32:46.889423 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.56s
2025-07-12 15:32:46.889434 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s
2025-07-12 15:32:46.889444 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s
2025-07-12 15:32:58.821108 | orchestrator | 2025-07-12 15:32:58 | INFO  | Task 361f79dc-aebc-4be6-a1da-98729686474e (facts) was prepared for execution.
2025-07-12 15:32:58.821270 | orchestrator | 2025-07-12 15:32:58 | INFO  | It takes a moment until task 361f79dc-aebc-4be6-a1da-98729686474e (facts) has been started and output is visible here.
2025-07-12 15:33:11.553112 | orchestrator |
2025-07-12 15:33:11.553226 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-12 15:33:11.553241 | orchestrator |
2025-07-12 15:33:11.553253 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 15:33:11.553265 | orchestrator | Saturday 12 July 2025 15:33:03 +0000 (0:00:00.336) 0:00:00.336 *********
2025-07-12 15:33:11.553276 | orchestrator | ok: [testbed-manager]
2025-07-12 15:33:11.553288 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:33:11.553299 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:33:11.553309 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:33:11.553320 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:33:11.553331 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:33:11.553341 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:33:11.553352 | orchestrator |
2025-07-12 15:33:11.553363 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 15:33:11.553376 | orchestrator | Saturday 12 July 2025 15:33:04 +0000 (0:00:01.331) 0:00:01.668 *********
2025-07-12 15:33:11.553417 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:33:11.553430 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:33:11.553440 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:33:11.553451 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:33:11.553461 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:33:11.553472 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:11.553482 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:11.553493 | orchestrator |
2025-07-12 15:33:11.553504 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 15:33:11.553514 | orchestrator |
2025-07-12 15:33:11.553525 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 15:33:11.553536 | orchestrator | Saturday 12 July 2025 15:33:05 +0000 (0:00:01.364) 0:00:03.033 *********
2025-07-12 15:33:11.553546 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:33:11.553587 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:33:11.553598 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:33:11.553609 | orchestrator | ok: [testbed-manager]
2025-07-12 15:33:11.553619 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:33:11.553630 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:33:11.553642 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:33:11.553654 | orchestrator |
2025-07-12 15:33:11.553666 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 15:33:11.553678 | orchestrator |
2025-07-12 15:33:11.553690 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 15:33:11.553702 | orchestrator | Saturday 12 July 2025 15:33:10 +0000 (0:00:04.730) 0:00:07.764 *********
2025-07-12 15:33:11.553715 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:33:11.553727 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:33:11.553739 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:33:11.553751 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:33:11.553763 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:33:11.553775 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:11.553787 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:11.553798 | orchestrator |
2025-07-12 15:33:11.553826 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:33:11.553839 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:33:11.553853 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:33:11.553865 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:33:11.553877 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:33:11.553889 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:33:11.553900 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:33:11.553913 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:33:11.553924 | orchestrator |
2025-07-12 15:33:11.553936 | orchestrator |
2025-07-12 15:33:11.553948 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:33:11.553961 | orchestrator | Saturday 12 July 2025 15:33:11 +0000 (0:00:00.560) 0:00:08.325 *********
2025-07-12 15:33:11.553973 | orchestrator | ===============================================================================
2025-07-12 15:33:11.553985 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.73s 2025-07-12 15:33:11.554012 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s 2025-07-12 15:33:11.554106 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.33s 2025-07-12 15:33:11.554124 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2025-07-12 15:33:13.707444 | orchestrator | 2025-07-12 15:33:13 | INFO  | Task 0fb36c54-e47c-4ca0-9f74-314f7c5cb952 (ceph-configure-lvm-volumes) was prepared for execution. 2025-07-12 15:33:13.707878 | orchestrator | 2025-07-12 15:33:13 | INFO  | It takes a moment until task 0fb36c54-e47c-4ca0-9f74-314f7c5cb952 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-07-12 15:33:24.974479 | orchestrator | 2025-07-12 15:33:24.974645 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-12 15:33:24.974665 | orchestrator | 2025-07-12 15:33:24.974677 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 15:33:24.974689 | orchestrator | Saturday 12 July 2025 15:33:17 +0000 (0:00:00.288) 0:00:00.288 ********* 2025-07-12 15:33:24.974702 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 15:33:24.974713 | orchestrator | 2025-07-12 15:33:24.974724 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 15:33:24.974755 | orchestrator | Saturday 12 July 2025 15:33:17 +0000 (0:00:00.217) 0:00:00.505 ********* 2025-07-12 15:33:24.974779 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:33:24.974792 | orchestrator | 2025-07-12 15:33:24.974803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.974814 | orchestrator | 
Saturday 12 July 2025 15:33:17 +0000 (0:00:00.200) 0:00:00.705 ********* 2025-07-12 15:33:24.974825 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-12 15:33:24.974836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-12 15:33:24.974847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-12 15:33:24.974857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-12 15:33:24.974868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-12 15:33:24.974879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-12 15:33:24.974892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-12 15:33:24.974903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-12 15:33:24.974914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-12 15:33:24.974925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-12 15:33:24.974935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-12 15:33:24.974946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-12 15:33:24.974957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-12 15:33:24.974968 | orchestrator | 2025-07-12 15:33:24.974979 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.974992 | orchestrator | Saturday 12 July 2025 15:33:18 +0000 (0:00:00.317) 0:00:01.023 ********* 2025-07-12 
15:33:24.975004 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.975016 | orchestrator | 2025-07-12 15:33:24.975029 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975041 | orchestrator | Saturday 12 July 2025 15:33:18 +0000 (0:00:00.467) 0:00:01.490 ********* 2025-07-12 15:33:24.975053 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.975065 | orchestrator | 2025-07-12 15:33:24.975077 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975116 | orchestrator | Saturday 12 July 2025 15:33:18 +0000 (0:00:00.189) 0:00:01.680 ********* 2025-07-12 15:33:24.975129 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.975141 | orchestrator | 2025-07-12 15:33:24.975154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975166 | orchestrator | Saturday 12 July 2025 15:33:19 +0000 (0:00:00.201) 0:00:01.881 ********* 2025-07-12 15:33:24.975179 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.975191 | orchestrator | 2025-07-12 15:33:24.975202 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975215 | orchestrator | Saturday 12 July 2025 15:33:19 +0000 (0:00:00.217) 0:00:02.099 ********* 2025-07-12 15:33:24.975227 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.975239 | orchestrator | 2025-07-12 15:33:24.975251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975263 | orchestrator | Saturday 12 July 2025 15:33:19 +0000 (0:00:00.205) 0:00:02.304 ********* 2025-07-12 15:33:24.975275 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.975287 | orchestrator | 2025-07-12 15:33:24.975300 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-07-12 15:33:24.975321 | orchestrator | Saturday 12 July 2025 15:33:19 +0000 (0:00:00.206) 0:00:02.511 ********* 2025-07-12 15:33:24.975334 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.975344 | orchestrator | 2025-07-12 15:33:24.975355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975366 | orchestrator | Saturday 12 July 2025 15:33:19 +0000 (0:00:00.203) 0:00:02.715 ********* 2025-07-12 15:33:24.975377 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.975387 | orchestrator | 2025-07-12 15:33:24.975398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975409 | orchestrator | Saturday 12 July 2025 15:33:20 +0000 (0:00:00.216) 0:00:02.931 ********* 2025-07-12 15:33:24.975419 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b) 2025-07-12 15:33:24.975431 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b) 2025-07-12 15:33:24.975442 | orchestrator | 2025-07-12 15:33:24.975453 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975463 | orchestrator | Saturday 12 July 2025 15:33:20 +0000 (0:00:00.406) 0:00:03.338 ********* 2025-07-12 15:33:24.975493 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984) 2025-07-12 15:33:24.975505 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984) 2025-07-12 15:33:24.975516 | orchestrator | 2025-07-12 15:33:24.975527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975537 | orchestrator | Saturday 12 July 2025 15:33:20 +0000 (0:00:00.393) 0:00:03.731 ********* 2025-07-12 15:33:24.975589 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2) 2025-07-12 15:33:24.975601 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2) 2025-07-12 15:33:24.975612 | orchestrator | 2025-07-12 15:33:24.975622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975633 | orchestrator | Saturday 12 July 2025 15:33:21 +0000 (0:00:00.620) 0:00:04.352 ********* 2025-07-12 15:33:24.975644 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21) 2025-07-12 15:33:24.975654 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21) 2025-07-12 15:33:24.975665 | orchestrator | 2025-07-12 15:33:24.975676 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:24.975687 | orchestrator | Saturday 12 July 2025 15:33:22 +0000 (0:00:00.637) 0:00:04.990 ********* 2025-07-12 15:33:24.975705 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 15:33:24.975716 | orchestrator | 2025-07-12 15:33:24.975727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:24.975738 | orchestrator | Saturday 12 July 2025 15:33:22 +0000 (0:00:00.758) 0:00:05.748 ********* 2025-07-12 15:33:24.975754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-12 15:33:24.975764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-12 15:33:24.975775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-12 15:33:24.975785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-07-12 15:33:24.975796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-12 15:33:24.975806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-12 15:33:24.975817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-12 15:33:24.975828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-12 15:33:24.975838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-12 15:33:24.975849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-12 15:33:24.975859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-12 15:33:24.975870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-12 15:33:24.975880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-12 15:33:24.975891 | orchestrator | 2025-07-12 15:33:24.975902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:24.975912 | orchestrator | Saturday 12 July 2025 15:33:23 +0000 (0:00:00.385) 0:00:06.134 ********* 2025-07-12 15:33:24.975923 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.975933 | orchestrator | 2025-07-12 15:33:24.975944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:24.975955 | orchestrator | Saturday 12 July 2025 15:33:23 +0000 (0:00:00.194) 0:00:06.328 ********* 2025-07-12 15:33:24.975965 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.975975 | orchestrator | 2025-07-12 15:33:24.975986 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-07-12 15:33:24.975997 | orchestrator | Saturday 12 July 2025 15:33:23 +0000 (0:00:00.203) 0:00:06.531 ********* 2025-07-12 15:33:24.976007 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.976018 | orchestrator | 2025-07-12 15:33:24.976028 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:24.976039 | orchestrator | Saturday 12 July 2025 15:33:23 +0000 (0:00:00.209) 0:00:06.741 ********* 2025-07-12 15:33:24.976049 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.976060 | orchestrator | 2025-07-12 15:33:24.976070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:24.976081 | orchestrator | Saturday 12 July 2025 15:33:24 +0000 (0:00:00.211) 0:00:06.953 ********* 2025-07-12 15:33:24.976092 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.976102 | orchestrator | 2025-07-12 15:33:24.976113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:24.976124 | orchestrator | Saturday 12 July 2025 15:33:24 +0000 (0:00:00.201) 0:00:07.154 ********* 2025-07-12 15:33:24.976134 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.976145 | orchestrator | 2025-07-12 15:33:24.976156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:24.976166 | orchestrator | Saturday 12 July 2025 15:33:24 +0000 (0:00:00.214) 0:00:07.369 ********* 2025-07-12 15:33:24.976183 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:24.976194 | orchestrator | 2025-07-12 15:33:24.976205 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:24.976215 | orchestrator | Saturday 12 July 2025 15:33:24 +0000 (0:00:00.189) 0:00:07.558 ********* 2025-07-12 15:33:24.976234 | 
orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.822821 | orchestrator | 2025-07-12 15:33:32.822942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:32.822970 | orchestrator | Saturday 12 July 2025 15:33:24 +0000 (0:00:00.188) 0:00:07.746 ********* 2025-07-12 15:33:32.822990 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-12 15:33:32.823008 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-12 15:33:32.823020 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-12 15:33:32.823031 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-12 15:33:32.823042 | orchestrator | 2025-07-12 15:33:32.823053 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:32.823064 | orchestrator | Saturday 12 July 2025 15:33:26 +0000 (0:00:01.096) 0:00:08.842 ********* 2025-07-12 15:33:32.823075 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.823086 | orchestrator | 2025-07-12 15:33:32.823097 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:32.823107 | orchestrator | Saturday 12 July 2025 15:33:26 +0000 (0:00:00.225) 0:00:09.068 ********* 2025-07-12 15:33:32.823118 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.823129 | orchestrator | 2025-07-12 15:33:32.823139 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:32.823150 | orchestrator | Saturday 12 July 2025 15:33:26 +0000 (0:00:00.228) 0:00:09.297 ********* 2025-07-12 15:33:32.823160 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.823171 | orchestrator | 2025-07-12 15:33:32.823181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:33:32.823192 | orchestrator | Saturday 12 July 2025 15:33:26 +0000 (0:00:00.209) 0:00:09.507 
********* 2025-07-12 15:33:32.823202 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.823213 | orchestrator | 2025-07-12 15:33:32.823224 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-12 15:33:32.823235 | orchestrator | Saturday 12 July 2025 15:33:26 +0000 (0:00:00.215) 0:00:09.723 ********* 2025-07-12 15:33:32.823245 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-07-12 15:33:32.823257 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-07-12 15:33:32.823276 | orchestrator | 2025-07-12 15:33:32.823296 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-12 15:33:32.823316 | orchestrator | Saturday 12 July 2025 15:33:27 +0000 (0:00:00.171) 0:00:09.895 ********* 2025-07-12 15:33:32.823336 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.823357 | orchestrator | 2025-07-12 15:33:32.823378 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 15:33:32.823399 | orchestrator | Saturday 12 July 2025 15:33:27 +0000 (0:00:00.143) 0:00:10.038 ********* 2025-07-12 15:33:32.823419 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.823439 | orchestrator | 2025-07-12 15:33:32.823452 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 15:33:32.823464 | orchestrator | Saturday 12 July 2025 15:33:27 +0000 (0:00:00.132) 0:00:10.171 ********* 2025-07-12 15:33:32.823476 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.823488 | orchestrator | 2025-07-12 15:33:32.823500 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 15:33:32.823512 | orchestrator | Saturday 12 July 2025 15:33:27 +0000 (0:00:00.132) 0:00:10.304 ********* 2025-07-12 15:33:32.823524 | orchestrator | ok: [testbed-node-3] 
2025-07-12 15:33:32.823562 | orchestrator | 2025-07-12 15:33:32.823577 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 15:33:32.823640 | orchestrator | Saturday 12 July 2025 15:33:27 +0000 (0:00:00.136) 0:00:10.440 ********* 2025-07-12 15:33:32.823662 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0c0189bb-8103-55ae-95fc-ac60d34dc15f'}}) 2025-07-12 15:33:32.823675 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'}}) 2025-07-12 15:33:32.823688 | orchestrator | 2025-07-12 15:33:32.823698 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 15:33:32.823709 | orchestrator | Saturday 12 July 2025 15:33:27 +0000 (0:00:00.176) 0:00:10.617 ********* 2025-07-12 15:33:32.823720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0c0189bb-8103-55ae-95fc-ac60d34dc15f'}})  2025-07-12 15:33:32.823739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'}})  2025-07-12 15:33:32.823749 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.823760 | orchestrator | 2025-07-12 15:33:32.823770 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 15:33:32.823781 | orchestrator | Saturday 12 July 2025 15:33:27 +0000 (0:00:00.152) 0:00:10.770 ********* 2025-07-12 15:33:32.823791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0c0189bb-8103-55ae-95fc-ac60d34dc15f'}})  2025-07-12 15:33:32.823802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'}})  2025-07-12 15:33:32.823812 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.823830 | 
orchestrator | 2025-07-12 15:33:32.823848 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 15:33:32.823864 | orchestrator | Saturday 12 July 2025 15:33:28 +0000 (0:00:00.137) 0:00:10.908 ********* 2025-07-12 15:33:32.823876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0c0189bb-8103-55ae-95fc-ac60d34dc15f'}})  2025-07-12 15:33:32.823886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'}})  2025-07-12 15:33:32.823897 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.823908 | orchestrator | 2025-07-12 15:33:32.823938 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 15:33:32.823949 | orchestrator | Saturday 12 July 2025 15:33:28 +0000 (0:00:00.349) 0:00:11.257 ********* 2025-07-12 15:33:32.823960 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:33:32.823970 | orchestrator | 2025-07-12 15:33:32.823981 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-12 15:33:32.823992 | orchestrator | Saturday 12 July 2025 15:33:28 +0000 (0:00:00.161) 0:00:11.419 ********* 2025-07-12 15:33:32.824002 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:33:32.824013 | orchestrator | 2025-07-12 15:33:32.824023 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 15:33:32.824036 | orchestrator | Saturday 12 July 2025 15:33:28 +0000 (0:00:00.155) 0:00:11.575 ********* 2025-07-12 15:33:32.824056 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.824074 | orchestrator | 2025-07-12 15:33:32.824090 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 15:33:32.824101 | orchestrator | Saturday 12 July 2025 15:33:28 +0000 (0:00:00.146) 0:00:11.721 
********* 2025-07-12 15:33:32.824111 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.824121 | orchestrator | 2025-07-12 15:33:32.824132 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-12 15:33:32.824142 | orchestrator | Saturday 12 July 2025 15:33:29 +0000 (0:00:00.117) 0:00:11.838 ********* 2025-07-12 15:33:32.824153 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.824163 | orchestrator | 2025-07-12 15:33:32.824174 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-12 15:33:32.824185 | orchestrator | Saturday 12 July 2025 15:33:29 +0000 (0:00:00.151) 0:00:11.990 ********* 2025-07-12 15:33:32.824205 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 15:33:32.824216 | orchestrator |  "ceph_osd_devices": { 2025-07-12 15:33:32.824227 | orchestrator |  "sdb": { 2025-07-12 15:33:32.824238 | orchestrator |  "osd_lvm_uuid": "0c0189bb-8103-55ae-95fc-ac60d34dc15f" 2025-07-12 15:33:32.824248 | orchestrator |  }, 2025-07-12 15:33:32.824259 | orchestrator |  "sdc": { 2025-07-12 15:33:32.824275 | orchestrator |  "osd_lvm_uuid": "2608adc8-8e22-540f-a74d-9f1d5d1ddc4f" 2025-07-12 15:33:32.824286 | orchestrator |  } 2025-07-12 15:33:32.824297 | orchestrator |  } 2025-07-12 15:33:32.824308 | orchestrator | } 2025-07-12 15:33:32.824318 | orchestrator | 2025-07-12 15:33:32.824329 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-12 15:33:32.824339 | orchestrator | Saturday 12 July 2025 15:33:29 +0000 (0:00:00.144) 0:00:12.135 ********* 2025-07-12 15:33:32.824350 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.824360 | orchestrator | 2025-07-12 15:33:32.824371 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-12 15:33:32.824381 | orchestrator | Saturday 12 July 2025 15:33:29 +0000 (0:00:00.137) 0:00:12.272 ********* 
2025-07-12 15:33:32.824391 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.824402 | orchestrator | 2025-07-12 15:33:32.824412 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-12 15:33:32.824423 | orchestrator | Saturday 12 July 2025 15:33:29 +0000 (0:00:00.143) 0:00:12.416 ********* 2025-07-12 15:33:32.824433 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:33:32.824444 | orchestrator | 2025-07-12 15:33:32.824454 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-12 15:33:32.824465 | orchestrator | Saturday 12 July 2025 15:33:29 +0000 (0:00:00.136) 0:00:12.552 ********* 2025-07-12 15:33:32.824476 | orchestrator | changed: [testbed-node-3] => { 2025-07-12 15:33:32.824486 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-12 15:33:32.824497 | orchestrator |  "ceph_osd_devices": { 2025-07-12 15:33:32.824508 | orchestrator |  "sdb": { 2025-07-12 15:33:32.824518 | orchestrator |  "osd_lvm_uuid": "0c0189bb-8103-55ae-95fc-ac60d34dc15f" 2025-07-12 15:33:32.824529 | orchestrator |  }, 2025-07-12 15:33:32.824582 | orchestrator |  "sdc": { 2025-07-12 15:33:32.824595 | orchestrator |  "osd_lvm_uuid": "2608adc8-8e22-540f-a74d-9f1d5d1ddc4f" 2025-07-12 15:33:32.824605 | orchestrator |  } 2025-07-12 15:33:32.824616 | orchestrator |  }, 2025-07-12 15:33:32.824631 | orchestrator |  "lvm_volumes": [ 2025-07-12 15:33:32.824642 | orchestrator |  { 2025-07-12 15:33:32.824654 | orchestrator |  "data": "osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f", 2025-07-12 15:33:32.824664 | orchestrator |  "data_vg": "ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f" 2025-07-12 15:33:32.824675 | orchestrator |  }, 2025-07-12 15:33:32.824685 | orchestrator |  { 2025-07-12 15:33:32.824696 | orchestrator |  "data": "osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f", 2025-07-12 15:33:32.824706 | orchestrator |  "data_vg": "ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f" 
2025-07-12 15:33:32.824717 | orchestrator |  } 2025-07-12 15:33:32.824728 | orchestrator |  ] 2025-07-12 15:33:32.824738 | orchestrator |  } 2025-07-12 15:33:32.824749 | orchestrator | } 2025-07-12 15:33:32.824759 | orchestrator | 2025-07-12 15:33:32.824770 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-12 15:33:32.824780 | orchestrator | Saturday 12 July 2025 15:33:29 +0000 (0:00:00.200) 0:00:12.753 ********* 2025-07-12 15:33:32.824791 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 15:33:32.824801 | orchestrator | 2025-07-12 15:33:32.824812 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-12 15:33:32.824822 | orchestrator | 2025-07-12 15:33:32.824833 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 15:33:32.824851 | orchestrator | Saturday 12 July 2025 15:33:32 +0000 (0:00:02.344) 0:00:15.098 ********* 2025-07-12 15:33:32.824862 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-12 15:33:32.824872 | orchestrator | 2025-07-12 15:33:32.824883 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 15:33:32.824893 | orchestrator | Saturday 12 July 2025 15:33:32 +0000 (0:00:00.257) 0:00:15.356 ********* 2025-07-12 15:33:32.824903 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:33:32.824914 | orchestrator | 2025-07-12 15:33:32.824925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:33:32.824944 | orchestrator | Saturday 12 July 2025 15:33:32 +0000 (0:00:00.241) 0:00:15.597 ********* 2025-07-12 15:33:41.482126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-12 15:33:41.482233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-4 => (item=loop1)
2025-07-12 15:33:41.482247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-07-12 15:33:41.482258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-07-12 15:33:41.482270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-07-12 15:33:41.482280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-07-12 15:33:41.482291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-07-12 15:33:41.482302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-07-12 15:33:41.482333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-07-12 15:33:41.482345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-07-12 15:33:41.482356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-07-12 15:33:41.482366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-07-12 15:33:41.482377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-07-12 15:33:41.482388 | orchestrator |
2025-07-12 15:33:41.482399 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.482411 | orchestrator | Saturday 12 July 2025 15:33:33 +0000 (0:00:00.415) 0:00:16.013 *********
2025-07-12 15:33:41.482422 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.482434 | orchestrator |
2025-07-12 15:33:41.482445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.482456 | orchestrator | Saturday 12 July 2025 15:33:33 +0000 (0:00:00.206) 0:00:16.220 *********
2025-07-12 15:33:41.482467 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.482478 | orchestrator |
2025-07-12 15:33:41.482488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.482499 | orchestrator | Saturday 12 July 2025 15:33:33 +0000 (0:00:00.207) 0:00:16.428 *********
2025-07-12 15:33:41.482510 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.482521 | orchestrator |
2025-07-12 15:33:41.482607 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.482622 | orchestrator | Saturday 12 July 2025 15:33:33 +0000 (0:00:00.201) 0:00:16.629 *********
2025-07-12 15:33:41.482636 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.482648 | orchestrator |
2025-07-12 15:33:41.482660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.482672 | orchestrator | Saturday 12 July 2025 15:33:34 +0000 (0:00:00.188) 0:00:16.818 *********
2025-07-12 15:33:41.482684 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.482696 | orchestrator |
2025-07-12 15:33:41.482708 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.482750 | orchestrator | Saturday 12 July 2025 15:33:34 +0000 (0:00:00.212) 0:00:17.031 *********
2025-07-12 15:33:41.482770 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.482788 | orchestrator |
2025-07-12 15:33:41.482809 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.482829 | orchestrator | Saturday 12 July 2025 15:33:34 +0000 (0:00:00.656) 0:00:17.688 *********
2025-07-12 15:33:41.482849 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.482867 | orchestrator |
2025-07-12 15:33:41.482887 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.482899 | orchestrator | Saturday 12 July 2025 15:33:35 +0000 (0:00:00.219) 0:00:17.907 *********
2025-07-12 15:33:41.482911 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.482923 | orchestrator |
2025-07-12 15:33:41.482935 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.482947 | orchestrator | Saturday 12 July 2025 15:33:35 +0000 (0:00:00.201) 0:00:18.109 *********
2025-07-12 15:33:41.482960 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd)
2025-07-12 15:33:41.482972 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd)
2025-07-12 15:33:41.482982 | orchestrator |
2025-07-12 15:33:41.482993 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.483003 | orchestrator | Saturday 12 July 2025 15:33:35 +0000 (0:00:00.423) 0:00:18.533 *********
2025-07-12 15:33:41.483014 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f)
2025-07-12 15:33:41.483024 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f)
2025-07-12 15:33:41.483035 | orchestrator |
2025-07-12 15:33:41.483045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.483056 | orchestrator | Saturday 12 July 2025 15:33:36 +0000 (0:00:00.447) 0:00:18.981 *********
2025-07-12 15:33:41.483067 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd)
2025-07-12 15:33:41.483078 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd)
2025-07-12 15:33:41.483088 | orchestrator |
2025-07-12 15:33:41.483099 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.483109 | orchestrator | Saturday 12 July 2025 15:33:36 +0000 (0:00:00.543) 0:00:19.524 *********
2025-07-12 15:33:41.483139 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96)
2025-07-12 15:33:41.483151 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96)
2025-07-12 15:33:41.483162 | orchestrator |
2025-07-12 15:33:41.483172 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:41.483183 | orchestrator | Saturday 12 July 2025 15:33:37 +0000 (0:00:00.396) 0:00:19.921 *********
2025-07-12 15:33:41.483193 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-12 15:33:41.483204 | orchestrator |
2025-07-12 15:33:41.483215 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483225 | orchestrator | Saturday 12 July 2025 15:33:37 +0000 (0:00:00.527) 0:00:20.448 *********
2025-07-12 15:33:41.483243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-07-12 15:33:41.483254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-07-12 15:33:41.483265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-07-12 15:33:41.483275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-07-12 15:33:41.483286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-07-12 15:33:41.483305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-07-12 15:33:41.483316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-07-12 15:33:41.483326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-07-12 15:33:41.483337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-07-12 15:33:41.483347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-07-12 15:33:41.483357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-07-12 15:33:41.483368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-07-12 15:33:41.483378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-07-12 15:33:41.483389 | orchestrator |
2025-07-12 15:33:41.483399 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483410 | orchestrator | Saturday 12 July 2025 15:33:38 +0000 (0:00:00.602) 0:00:21.050 *********
2025-07-12 15:33:41.483420 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.483431 | orchestrator |
2025-07-12 15:33:41.483441 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483452 | orchestrator | Saturday 12 July 2025 15:33:38 +0000 (0:00:00.228) 0:00:21.279 *********
2025-07-12 15:33:41.483462 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.483473 | orchestrator |
2025-07-12 15:33:41.483483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483494 | orchestrator | Saturday 12 July 2025 15:33:39 +0000 (0:00:00.738) 0:00:22.017 *********
2025-07-12 15:33:41.483504 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.483515 | orchestrator |
2025-07-12 15:33:41.483525 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483564 | orchestrator | Saturday 12 July 2025 15:33:39 +0000 (0:00:00.199) 0:00:22.217 *********
2025-07-12 15:33:41.483581 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.483592 | orchestrator |
2025-07-12 15:33:41.483603 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483613 | orchestrator | Saturday 12 July 2025 15:33:39 +0000 (0:00:00.227) 0:00:22.444 *********
2025-07-12 15:33:41.483624 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.483634 | orchestrator |
2025-07-12 15:33:41.483645 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483656 | orchestrator | Saturday 12 July 2025 15:33:39 +0000 (0:00:00.230) 0:00:22.675 *********
2025-07-12 15:33:41.483666 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.483677 | orchestrator |
2025-07-12 15:33:41.483687 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483698 | orchestrator | Saturday 12 July 2025 15:33:40 +0000 (0:00:00.264) 0:00:22.939 *********
2025-07-12 15:33:41.483708 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.483719 | orchestrator |
2025-07-12 15:33:41.483729 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483740 | orchestrator | Saturday 12 July 2025 15:33:40 +0000 (0:00:00.190) 0:00:23.129 *********
2025-07-12 15:33:41.483751 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.483761 | orchestrator |
2025-07-12 15:33:41.483772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483783 | orchestrator | Saturday 12 July 2025 15:33:40 +0000 (0:00:00.236) 0:00:23.366 *********
2025-07-12 15:33:41.483796 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-07-12 15:33:41.483816 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-07-12 15:33:41.483835 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-07-12 15:33:41.483856 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-07-12 15:33:41.483877 | orchestrator |
2025-07-12 15:33:41.483887 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:41.483898 | orchestrator | Saturday 12 July 2025 15:33:41 +0000 (0:00:00.661) 0:00:24.028 *********
2025-07-12 15:33:41.483909 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:41.483919 | orchestrator |
2025-07-12 15:33:41.483937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:47.859124 | orchestrator | Saturday 12 July 2025 15:33:41 +0000 (0:00:00.228) 0:00:24.256 *********
2025-07-12 15:33:47.859232 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.859248 | orchestrator |
2025-07-12 15:33:47.859260 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:47.859271 | orchestrator | Saturday 12 July 2025 15:33:41 +0000 (0:00:00.188) 0:00:24.445 *********
2025-07-12 15:33:47.859282 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.859293 | orchestrator |
2025-07-12 15:33:47.859303 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:47.859314 | orchestrator | Saturday 12 July 2025 15:33:41 +0000 (0:00:00.217) 0:00:24.663 *********
2025-07-12 15:33:47.859324 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.859335 | orchestrator |
2025-07-12 15:33:47.859346 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-07-12 15:33:47.859357 | orchestrator | Saturday 12 July 2025 15:33:42 +0000 (0:00:00.199) 0:00:24.862 *********
2025-07-12 15:33:47.859368 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-07-12 15:33:47.859379 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-07-12 15:33:47.859389 | orchestrator |
2025-07-12 15:33:47.859400 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-07-12 15:33:47.859429 | orchestrator | Saturday 12 July 2025 15:33:42 +0000 (0:00:00.341) 0:00:25.204 *********
2025-07-12 15:33:47.859440 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.859451 | orchestrator |
2025-07-12 15:33:47.859461 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-07-12 15:33:47.859471 | orchestrator | Saturday 12 July 2025 15:33:42 +0000 (0:00:00.138) 0:00:25.343 *********
2025-07-12 15:33:47.859482 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.859492 | orchestrator |
2025-07-12 15:33:47.859503 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-07-12 15:33:47.859513 | orchestrator | Saturday 12 July 2025 15:33:42 +0000 (0:00:00.147) 0:00:25.490 *********
2025-07-12 15:33:47.859523 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.859572 | orchestrator |
2025-07-12 15:33:47.859583 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-07-12 15:33:47.859594 | orchestrator | Saturday 12 July 2025 15:33:42 +0000 (0:00:00.129) 0:00:25.620 *********
2025-07-12 15:33:47.859605 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:33:47.859616 | orchestrator |
2025-07-12 15:33:47.859627 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-07-12 15:33:47.859637 | orchestrator | Saturday 12 July 2025 15:33:42 +0000 (0:00:00.130) 0:00:25.751 *********
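The "Generate lvm_volumes structure (block only)" task above turns the `ceph_osd_devices` mapping into the `lvm_volumes` list that ceph-ansible consumes. A minimal sketch of that mapping, not the OSISM implementation itself; the `ceph-<uuid>` / `osd-block-<uuid>` naming convention is taken from the configuration printed later in this log:

```python
# Sketch: derive ceph-ansible's lvm_volumes list from the ceph_osd_devices
# mapping seen in this log (block-only layout, no separate DB/WAL devices).
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    volumes = []
    for device, cfg in ceph_osd_devices.items():
        osd_uuid = cfg["osd_lvm_uuid"]
        volumes.append({
            # LV name and VG name both embed the per-device UUID, so the
            # mapping from disk to OSD volume stays stable across runs.
            "data": f"osd-block-{osd_uuid}",
            "data_vg": f"ceph-{osd_uuid}",
        })
    return volumes


# Values as reported for testbed-node-4 in this log.
devices = {
    "sdb": {"osd_lvm_uuid": "ed518422-90c3-5ab9-913f-91d667874e9d"},
    "sdc": {"osd_lvm_uuid": "66e431f6-efaf-5b66-8dd9-edbf314ce410"},
}
```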
2025-07-12 15:33:47.859648 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ed518422-90c3-5ab9-913f-91d667874e9d'}})
2025-07-12 15:33:47.859661 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '66e431f6-efaf-5b66-8dd9-edbf314ce410'}})
2025-07-12 15:33:47.859673 | orchestrator |
2025-07-12 15:33:47.859685 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-07-12 15:33:47.859698 | orchestrator | Saturday 12 July 2025 15:33:43 +0000 (0:00:00.173) 0:00:25.924 *********
2025-07-12 15:33:47.859710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ed518422-90c3-5ab9-913f-91d667874e9d'}})
2025-07-12 15:33:47.859724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '66e431f6-efaf-5b66-8dd9-edbf314ce410'}})
2025-07-12 15:33:47.859763 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.859775 | orchestrator |
2025-07-12 15:33:47.859787 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-07-12 15:33:47.859799 | orchestrator | Saturday 12 July 2025 15:33:43 +0000 (0:00:00.150) 0:00:26.075 *********
2025-07-12 15:33:47.859811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ed518422-90c3-5ab9-913f-91d667874e9d'}})
2025-07-12 15:33:47.859824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '66e431f6-efaf-5b66-8dd9-edbf314ce410'}})
2025-07-12 15:33:47.859836 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.859847 | orchestrator |
2025-07-12 15:33:47.859859 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-07-12 15:33:47.859872 | orchestrator | Saturday 12 July 2025 15:33:43 +0000 (0:00:00.147) 0:00:26.222 *********
2025-07-12 15:33:47.859884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ed518422-90c3-5ab9-913f-91d667874e9d'}})
2025-07-12 15:33:47.859896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '66e431f6-efaf-5b66-8dd9-edbf314ce410'}})
2025-07-12 15:33:47.859908 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.859920 | orchestrator |
2025-07-12 15:33:47.859932 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-07-12 15:33:47.859945 | orchestrator | Saturday 12 July 2025 15:33:43 +0000 (0:00:00.145) 0:00:26.368 *********
2025-07-12 15:33:47.859956 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:33:47.859968 | orchestrator |
2025-07-12 15:33:47.859980 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-07-12 15:33:47.859992 | orchestrator | Saturday 12 July 2025 15:33:43 +0000 (0:00:00.140) 0:00:26.508 *********
2025-07-12 15:33:47.860004 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:33:47.860015 | orchestrator |
2025-07-12 15:33:47.860026 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-07-12 15:33:47.860037 | orchestrator | Saturday 12 July 2025 15:33:43 +0000 (0:00:00.139) 0:00:26.648 *********
2025-07-12 15:33:47.860047 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.860058 | orchestrator |
2025-07-12 15:33:47.860086 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-07-12 15:33:47.860097 | orchestrator | Saturday 12 July 2025 15:33:43 +0000 (0:00:00.114) 0:00:26.763 *********
2025-07-12 15:33:47.860108 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.860119 | orchestrator |
2025-07-12 15:33:47.860129 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-07-12 15:33:47.860140 | orchestrator | Saturday 12 July 2025 15:33:44 +0000 (0:00:00.328) 0:00:27.091 *********
2025-07-12 15:33:47.860150 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.860161 | orchestrator |
2025-07-12 15:33:47.860171 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-07-12 15:33:47.860182 | orchestrator | Saturday 12 July 2025 15:33:44 +0000 (0:00:00.147) 0:00:27.239 *********
2025-07-12 15:33:47.860192 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 15:33:47.860203 | orchestrator |  "ceph_osd_devices": {
2025-07-12 15:33:47.860214 | orchestrator |  "sdb": {
2025-07-12 15:33:47.860225 | orchestrator |  "osd_lvm_uuid": "ed518422-90c3-5ab9-913f-91d667874e9d"
2025-07-12 15:33:47.860235 | orchestrator |  },
2025-07-12 15:33:47.860246 | orchestrator |  "sdc": {
2025-07-12 15:33:47.860256 | orchestrator |  "osd_lvm_uuid": "66e431f6-efaf-5b66-8dd9-edbf314ce410"
2025-07-12 15:33:47.860267 | orchestrator |  }
2025-07-12 15:33:47.860277 | orchestrator |  }
2025-07-12 15:33:47.860288 | orchestrator | }
2025-07-12 15:33:47.860299 | orchestrator |
2025-07-12 15:33:47.860309 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-07-12 15:33:47.860320 | orchestrator | Saturday 12 July 2025 15:33:44 +0000 (0:00:00.156) 0:00:27.395 *********
2025-07-12 15:33:47.860338 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.860348 | orchestrator |
2025-07-12 15:33:47.860359 | orchestrator | TASK [Print DB devices] ********************************************************
2025-07-12 15:33:47.860369 | orchestrator | Saturday 12 July 2025 15:33:44 +0000 (0:00:00.139) 0:00:27.535 *********
2025-07-12 15:33:47.860380 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.860390 | orchestrator |
2025-07-12 15:33:47.860401 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-07-12 15:33:47.860412 | orchestrator | Saturday 12 July 2025 15:33:44 +0000 (0:00:00.136) 0:00:27.672 *********
2025-07-12 15:33:47.860422 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:33:47.860433 | orchestrator |
2025-07-12 15:33:47.860443 | orchestrator | TASK [Print configuration data] ************************************************
2025-07-12 15:33:47.860454 | orchestrator | Saturday 12 July 2025 15:33:45 +0000 (0:00:00.142) 0:00:27.814 *********
2025-07-12 15:33:47.860464 | orchestrator | changed: [testbed-node-4] => {
2025-07-12 15:33:47.860475 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-07-12 15:33:47.860486 | orchestrator |  "ceph_osd_devices": {
2025-07-12 15:33:47.860496 | orchestrator |  "sdb": {
2025-07-12 15:33:47.860513 | orchestrator |  "osd_lvm_uuid": "ed518422-90c3-5ab9-913f-91d667874e9d"
2025-07-12 15:33:47.860524 | orchestrator |  },
2025-07-12 15:33:47.860566 | orchestrator |  "sdc": {
2025-07-12 15:33:47.860586 | orchestrator |  "osd_lvm_uuid": "66e431f6-efaf-5b66-8dd9-edbf314ce410"
2025-07-12 15:33:47.860604 | orchestrator |  }
2025-07-12 15:33:47.860628 | orchestrator |  },
2025-07-12 15:33:47.860654 | orchestrator |  "lvm_volumes": [
2025-07-12 15:33:47.860672 | orchestrator |  {
2025-07-12 15:33:47.860689 | orchestrator |  "data": "osd-block-ed518422-90c3-5ab9-913f-91d667874e9d",
2025-07-12 15:33:47.860707 | orchestrator |  "data_vg": "ceph-ed518422-90c3-5ab9-913f-91d667874e9d"
2025-07-12 15:33:47.860723 | orchestrator |  },
2025-07-12 15:33:47.860738 | orchestrator |  {
2025-07-12 15:33:47.860754 | orchestrator |  "data": "osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410",
2025-07-12 15:33:47.860770 | orchestrator |  "data_vg": "ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410"
2025-07-12 15:33:47.860785 | orchestrator |  }
2025-07-12 15:33:47.860801 | orchestrator |  ]
2025-07-12 15:33:47.860817 | orchestrator |  }
2025-07-12 15:33:47.860833 | orchestrator | }
2025-07-12 15:33:47.860850 | orchestrator |
2025-07-12 15:33:47.860866 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-07-12 15:33:47.860883 | orchestrator | Saturday 12 July 2025 15:33:45 +0000 (0:00:00.190) 0:00:28.005 *********
2025-07-12 15:33:47.860900 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-07-12 15:33:47.860916 | orchestrator |
2025-07-12 15:33:47.860934 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-07-12 15:33:47.860950 | orchestrator |
2025-07-12 15:33:47.860967 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 15:33:47.860984 | orchestrator | Saturday 12 July 2025 15:33:46 +0000 (0:00:01.123) 0:00:29.129 *********
2025-07-12 15:33:47.861002 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-12 15:33:47.861020 | orchestrator |
2025-07-12 15:33:47.861038 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-12 15:33:47.861055 | orchestrator | Saturday 12 July 2025 15:33:46 +0000 (0:00:00.463) 0:00:29.593 *********
2025-07-12 15:33:47.861073 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:33:47.861092 | orchestrator |
2025-07-12 15:33:47.861104 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:47.861115 | orchestrator | Saturday 12 July 2025 15:33:47 +0000 (0:00:00.659) 0:00:30.252 *********
2025-07-12 15:33:47.861126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-07-12 15:33:47.861147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-07-12 15:33:47.861158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-07-12 15:33:47.861168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-07-12 15:33:47.861179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-07-12 15:33:47.861189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-07-12 15:33:47.861212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-07-12 15:33:56.025453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-07-12 15:33:56.025637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-07-12 15:33:56.025656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-07-12 15:33:56.025668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-07-12 15:33:56.025679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-07-12 15:33:56.025690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-07-12 15:33:56.025701 | orchestrator |
2025-07-12 15:33:56.025713 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.025725 | orchestrator | Saturday 12 July 2025 15:33:47 +0000 (0:00:00.377) 0:00:30.630 *********
2025-07-12 15:33:56.025735 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.025747 | orchestrator |
2025-07-12 15:33:56.025758 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.025769 | orchestrator | Saturday 12 July 2025 15:33:48 +0000 (0:00:00.212) 0:00:30.842 *********
2025-07-12 15:33:56.025779 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.025790 | orchestrator |
2025-07-12 15:33:56.025800 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.025811 | orchestrator | Saturday 12 July 2025 15:33:48 +0000 (0:00:00.203) 0:00:31.046 *********
2025-07-12 15:33:56.025821 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.025832 | orchestrator |
2025-07-12 15:33:56.025842 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.025853 | orchestrator | Saturday 12 July 2025 15:33:48 +0000 (0:00:00.200) 0:00:31.246 *********
2025-07-12 15:33:56.025864 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.025874 | orchestrator |
2025-07-12 15:33:56.025884 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.025895 | orchestrator | Saturday 12 July 2025 15:33:48 +0000 (0:00:00.203) 0:00:31.449 *********
2025-07-12 15:33:56.025906 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.025916 | orchestrator |
2025-07-12 15:33:56.025927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.025937 | orchestrator | Saturday 12 July 2025 15:33:48 +0000 (0:00:00.202) 0:00:31.652 *********
2025-07-12 15:33:56.025948 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.025958 | orchestrator |
2025-07-12 15:33:56.025970 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.025981 | orchestrator | Saturday 12 July 2025 15:33:49 +0000 (0:00:00.204) 0:00:31.856 *********
2025-07-12 15:33:56.025991 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.026002 | orchestrator |
2025-07-12 15:33:56.026012 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.026101 | orchestrator | Saturday 12 July 2025 15:33:49 +0000 (0:00:00.203) 0:00:32.060 *********
2025-07-12 15:33:56.026112 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.026123 | orchestrator |
2025-07-12 15:33:56.026133 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.026168 | orchestrator | Saturday 12 July 2025 15:33:49 +0000 (0:00:00.193) 0:00:32.253 *********
2025-07-12 15:33:56.026180 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e)
2025-07-12 15:33:56.026192 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e)
2025-07-12 15:33:56.026202 | orchestrator |
2025-07-12 15:33:56.026213 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.026224 | orchestrator | Saturday 12 July 2025 15:33:50 +0000 (0:00:00.621) 0:00:32.875 *********
2025-07-12 15:33:56.026235 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d)
2025-07-12 15:33:56.026245 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d)
2025-07-12 15:33:56.026256 | orchestrator |
2025-07-12 15:33:56.026266 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.026277 | orchestrator | Saturday 12 July 2025 15:33:50 +0000 (0:00:00.809) 0:00:33.684 *********
2025-07-12 15:33:56.026288 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be)
2025-07-12 15:33:56.026299 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be)
2025-07-12 15:33:56.026309 | orchestrator |
2025-07-12 15:33:56.026337 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.026348 | orchestrator | Saturday 12 July 2025 15:33:51 +0000 (0:00:00.399) 0:00:34.084 *********
2025-07-12 15:33:56.026358 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a)
2025-07-12 15:33:56.026376 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a)
2025-07-12 15:33:56.026387 | orchestrator |
2025-07-12 15:33:56.026398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:33:56.026409 | orchestrator | Saturday 12 July 2025 15:33:51 +0000 (0:00:00.443) 0:00:34.528 *********
2025-07-12 15:33:56.026419 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-12 15:33:56.026430 | orchestrator |
2025-07-12 15:33:56.026440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.026451 | orchestrator | Saturday 12 July 2025 15:33:52 +0000 (0:00:00.347) 0:00:34.875 *********
2025-07-12 15:33:56.026481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-07-12 15:33:56.026493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-07-12 15:33:56.026503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-07-12 15:33:56.026514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-07-12 15:33:56.026545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-07-12 15:33:56.026556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-07-12 15:33:56.026567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-07-12 15:33:56.026578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-07-12 15:33:56.026588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-07-12 15:33:56.026599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-07-12 15:33:56.026609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-07-12 15:33:56.026620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-07-12 15:33:56.026630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-07-12 15:33:56.026650 | orchestrator |
2025-07-12 15:33:56.026661 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.026672 | orchestrator | Saturday 12 July 2025 15:33:52 +0000 (0:00:00.396) 0:00:35.272 *********
2025-07-12 15:33:56.026682 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.026692 | orchestrator |
2025-07-12 15:33:56.026703 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.026714 | orchestrator | Saturday 12 July 2025 15:33:52 +0000 (0:00:00.209) 0:00:35.482 *********
2025-07-12 15:33:56.026724 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.026735 | orchestrator |
2025-07-12 15:33:56.026745 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.026756 | orchestrator | Saturday 12 July 2025 15:33:52 +0000 (0:00:00.208) 0:00:35.690 *********
2025-07-12 15:33:56.026766 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.026777 | orchestrator |
2025-07-12 15:33:56.026787 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.026798 | orchestrator | Saturday 12 July 2025 15:33:53 +0000 (0:00:00.199) 0:00:35.889 *********
2025-07-12 15:33:56.026808 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.026819 | orchestrator |
2025-07-12 15:33:56.026829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.026840 | orchestrator | Saturday 12 July 2025 15:33:53 +0000 (0:00:00.217) 0:00:36.107 *********
2025-07-12 15:33:56.026850 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.026860 | orchestrator |
2025-07-12 15:33:56.026871 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.026881 | orchestrator | Saturday 12 July 2025 15:33:53 +0000 (0:00:00.207) 0:00:36.315 *********
2025-07-12 15:33:56.026892 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.026902 | orchestrator |
2025-07-12 15:33:56.026912 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.026923 | orchestrator | Saturday 12 July 2025 15:33:54 +0000 (0:00:00.641) 0:00:36.956 *********
2025-07-12 15:33:56.026933 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.026944 | orchestrator |
2025-07-12 15:33:56.026954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.026965 | orchestrator | Saturday 12 July 2025 15:33:54 +0000 (0:00:00.213) 0:00:37.169 *********
2025-07-12 15:33:56.026975 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.026985 | orchestrator |
2025-07-12 15:33:56.026996 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.027006 | orchestrator | Saturday 12 July 2025 15:33:54 +0000 (0:00:00.193) 0:00:37.362 *********
2025-07-12 15:33:56.027017 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-07-12 15:33:56.027028 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-07-12 15:33:56.027038 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-07-12 15:33:56.027049 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-07-12 15:33:56.027060 | orchestrator |
2025-07-12 15:33:56.027070 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.027081 | orchestrator | Saturday 12 July 2025 15:33:55 +0000 (0:00:00.622) 0:00:37.985 *********
2025-07-12 15:33:56.027092 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.027102 | orchestrator |
2025-07-12 15:33:56.027113 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.027123 | orchestrator | Saturday 12 July 2025 15:33:55 +0000 (0:00:00.200) 0:00:38.185 *********
2025-07-12 15:33:56.027134 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.027144 | orchestrator |
2025-07-12 15:33:56.027155 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.027165 | orchestrator | Saturday 12 July 2025 15:33:55 +0000 (0:00:00.202) 0:00:38.388 *********
2025-07-12 15:33:56.027175 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.027194 | orchestrator |
2025-07-12 15:33:56.027205 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:33:56.027215 | orchestrator | Saturday 12 July 2025 15:33:55 +0000 (0:00:00.199) 0:00:38.587 *********
2025-07-12 15:33:56.027226 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:33:56.027236 | orchestrator |
2025-07-12 15:33:56.027247 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-07-12 15:33:56.027264 | orchestrator | Saturday 12 July 2025 15:33:56 +0000 (0:00:00.215) 0:00:38.802 *********
2025-07-12 15:34:00.220410 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-07-12 15:34:00.220577 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-07-12 15:34:00.220594 | orchestrator | 2025-07-12 15:34:00.220607 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-12 15:34:00.220618 | orchestrator | Saturday 12 July 2025 15:33:56 +0000 (0:00:00.166) 0:00:38.969 ********* 2025-07-12 15:34:00.220629 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.220640 | orchestrator | 2025-07-12 15:34:00.220652 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 15:34:00.220663 | orchestrator | Saturday 12 July 2025 15:33:56 +0000 (0:00:00.127) 0:00:39.096 ********* 2025-07-12 15:34:00.220673 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.220684 | orchestrator | 2025-07-12 15:34:00.220695 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 15:34:00.220705 | orchestrator | Saturday 12 July 2025 15:33:56 +0000 (0:00:00.126) 0:00:39.223 ********* 2025-07-12 15:34:00.220716 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.220727 | orchestrator | 2025-07-12 15:34:00.220737 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 15:34:00.220748 | orchestrator | Saturday 12 July 2025 15:33:56 +0000 (0:00:00.126) 0:00:39.350 ********* 2025-07-12 15:34:00.220758 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:34:00.220770 | orchestrator | 2025-07-12 15:34:00.220781 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 15:34:00.220791 | orchestrator | Saturday 12 July 2025 15:33:56 +0000 (0:00:00.310) 0:00:39.661 ********* 2025-07-12 15:34:00.220802 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98eaa118-ceae-5fd7-911b-5a5c065fb5e7'}}) 2025-07-12 15:34:00.220814 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'd3106c13-92fd-5dcd-ba4d-74ce9f77b023'}}) 2025-07-12 15:34:00.220824 | orchestrator | 2025-07-12 15:34:00.220855 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 15:34:00.220867 | orchestrator | Saturday 12 July 2025 15:33:57 +0000 (0:00:00.182) 0:00:39.844 ********* 2025-07-12 15:34:00.220878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98eaa118-ceae-5fd7-911b-5a5c065fb5e7'}})  2025-07-12 15:34:00.220889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd3106c13-92fd-5dcd-ba4d-74ce9f77b023'}})  2025-07-12 15:34:00.220900 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.220910 | orchestrator | 2025-07-12 15:34:00.220921 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 15:34:00.220932 | orchestrator | Saturday 12 July 2025 15:33:57 +0000 (0:00:00.161) 0:00:40.006 ********* 2025-07-12 15:34:00.220945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98eaa118-ceae-5fd7-911b-5a5c065fb5e7'}})  2025-07-12 15:34:00.220958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd3106c13-92fd-5dcd-ba4d-74ce9f77b023'}})  2025-07-12 15:34:00.220970 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.220982 | orchestrator | 2025-07-12 15:34:00.220995 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 15:34:00.221008 | orchestrator | Saturday 12 July 2025 15:33:57 +0000 (0:00:00.148) 0:00:40.154 ********* 2025-07-12 15:34:00.221041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98eaa118-ceae-5fd7-911b-5a5c065fb5e7'}})  2025-07-12 15:34:00.221053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'd3106c13-92fd-5dcd-ba4d-74ce9f77b023'}})  2025-07-12 15:34:00.221066 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.221078 | orchestrator | 2025-07-12 15:34:00.221090 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 15:34:00.221102 | orchestrator | Saturday 12 July 2025 15:33:57 +0000 (0:00:00.154) 0:00:40.309 ********* 2025-07-12 15:34:00.221114 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:34:00.221127 | orchestrator | 2025-07-12 15:34:00.221139 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-12 15:34:00.221151 | orchestrator | Saturday 12 July 2025 15:33:57 +0000 (0:00:00.147) 0:00:40.457 ********* 2025-07-12 15:34:00.221164 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:34:00.221176 | orchestrator | 2025-07-12 15:34:00.221189 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 15:34:00.221201 | orchestrator | Saturday 12 July 2025 15:33:57 +0000 (0:00:00.147) 0:00:40.605 ********* 2025-07-12 15:34:00.221213 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.221225 | orchestrator | 2025-07-12 15:34:00.221237 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 15:34:00.221249 | orchestrator | Saturday 12 July 2025 15:33:57 +0000 (0:00:00.140) 0:00:40.745 ********* 2025-07-12 15:34:00.221269 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.221283 | orchestrator | 2025-07-12 15:34:00.221296 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-12 15:34:00.221307 | orchestrator | Saturday 12 July 2025 15:33:58 +0000 (0:00:00.148) 0:00:40.893 ********* 2025-07-12 15:34:00.221317 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.221327 | orchestrator | 2025-07-12 15:34:00.221338 | orchestrator | TASK [Print 
ceph_osd_devices] ************************************************** 2025-07-12 15:34:00.221349 | orchestrator | Saturday 12 July 2025 15:33:58 +0000 (0:00:00.134) 0:00:41.028 ********* 2025-07-12 15:34:00.221360 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 15:34:00.221371 | orchestrator |  "ceph_osd_devices": { 2025-07-12 15:34:00.221381 | orchestrator |  "sdb": { 2025-07-12 15:34:00.221393 | orchestrator |  "osd_lvm_uuid": "98eaa118-ceae-5fd7-911b-5a5c065fb5e7" 2025-07-12 15:34:00.221426 | orchestrator |  }, 2025-07-12 15:34:00.221439 | orchestrator |  "sdc": { 2025-07-12 15:34:00.221449 | orchestrator |  "osd_lvm_uuid": "d3106c13-92fd-5dcd-ba4d-74ce9f77b023" 2025-07-12 15:34:00.221460 | orchestrator |  } 2025-07-12 15:34:00.221471 | orchestrator |  } 2025-07-12 15:34:00.221482 | orchestrator | } 2025-07-12 15:34:00.221492 | orchestrator | 2025-07-12 15:34:00.221503 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-12 15:34:00.221514 | orchestrator | Saturday 12 July 2025 15:33:58 +0000 (0:00:00.137) 0:00:41.165 ********* 2025-07-12 15:34:00.221542 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.221553 | orchestrator | 2025-07-12 15:34:00.221563 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-12 15:34:00.221574 | orchestrator | Saturday 12 July 2025 15:33:58 +0000 (0:00:00.131) 0:00:41.296 ********* 2025-07-12 15:34:00.221585 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.221596 | orchestrator | 2025-07-12 15:34:00.221606 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-12 15:34:00.221617 | orchestrator | Saturday 12 July 2025 15:33:58 +0000 (0:00:00.326) 0:00:41.623 ********* 2025-07-12 15:34:00.221628 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:34:00.221638 | orchestrator | 2025-07-12 15:34:00.221649 | orchestrator | TASK [Print 
configuration data] ************************************************ 2025-07-12 15:34:00.221660 | orchestrator | Saturday 12 July 2025 15:33:58 +0000 (0:00:00.140) 0:00:41.764 ********* 2025-07-12 15:34:00.221679 | orchestrator | changed: [testbed-node-5] => { 2025-07-12 15:34:00.221690 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-12 15:34:00.221701 | orchestrator |  "ceph_osd_devices": { 2025-07-12 15:34:00.221712 | orchestrator |  "sdb": { 2025-07-12 15:34:00.221723 | orchestrator |  "osd_lvm_uuid": "98eaa118-ceae-5fd7-911b-5a5c065fb5e7" 2025-07-12 15:34:00.221734 | orchestrator |  }, 2025-07-12 15:34:00.221744 | orchestrator |  "sdc": { 2025-07-12 15:34:00.221755 | orchestrator |  "osd_lvm_uuid": "d3106c13-92fd-5dcd-ba4d-74ce9f77b023" 2025-07-12 15:34:00.221766 | orchestrator |  } 2025-07-12 15:34:00.221777 | orchestrator |  }, 2025-07-12 15:34:00.221787 | orchestrator |  "lvm_volumes": [ 2025-07-12 15:34:00.221798 | orchestrator |  { 2025-07-12 15:34:00.221809 | orchestrator |  "data": "osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7", 2025-07-12 15:34:00.221820 | orchestrator |  "data_vg": "ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7" 2025-07-12 15:34:00.221830 | orchestrator |  }, 2025-07-12 15:34:00.221841 | orchestrator |  { 2025-07-12 15:34:00.221852 | orchestrator |  "data": "osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023", 2025-07-12 15:34:00.221863 | orchestrator |  "data_vg": "ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023" 2025-07-12 15:34:00.221873 | orchestrator |  } 2025-07-12 15:34:00.221884 | orchestrator |  ] 2025-07-12 15:34:00.221895 | orchestrator |  } 2025-07-12 15:34:00.221906 | orchestrator | } 2025-07-12 15:34:00.221916 | orchestrator | 2025-07-12 15:34:00.221927 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-12 15:34:00.221938 | orchestrator | Saturday 12 July 2025 15:33:59 +0000 (0:00:00.216) 0:00:41.980 ********* 2025-07-12 15:34:00.221949 | orchestrator | changed: 
[testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-12 15:34:00.221960 | orchestrator |
2025-07-12 15:34:00.221970 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:34:00.221982 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-12 15:34:00.221994 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-12 15:34:00.222005 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-12 15:34:00.222075 | orchestrator |
2025-07-12 15:34:00.222091 | orchestrator |
2025-07-12 15:34:00.222101 | orchestrator |
2025-07-12 15:34:00.222112 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:34:00.222123 | orchestrator | Saturday 12 July 2025 15:34:00 +0000 (0:00:01.000) 0:00:42.981 *********
2025-07-12 15:34:00.222134 | orchestrator | ===============================================================================
2025-07-12 15:34:00.222145 | orchestrator | Write configuration file ------------------------------------------------ 4.47s
2025-07-12 15:34:00.222155 | orchestrator | Add known partitions to the list of available block devices ------------- 1.38s
2025-07-12 15:34:00.222166 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s
2025-07-12 15:34:00.222177 | orchestrator | Get initial list of available block devices ----------------------------- 1.10s
2025-07-12 15:34:00.222187 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2025-07-12 15:34:00.222198 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.94s
2025-07-12 15:34:00.222209 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2025-07-12 15:34:00.222219 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s
2025-07-12 15:34:00.222230 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2025-07-12 15:34:00.222241 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.68s
2025-07-12 15:34:00.222260 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-07-12 15:34:00.222271 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-07-12 15:34:00.222281 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.65s
2025-07-12 15:34:00.222292 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-07-12 15:34:00.222310 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-07-12 15:34:00.533112 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-07-12 15:34:00.533213 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-07-12 15:34:00.533227 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-07-12 15:34:00.533261 | orchestrator | Print DB devices -------------------------------------------------------- 0.61s
2025-07-12 15:34:00.533280 | orchestrator | Print configuration data ------------------------------------------------ 0.61s
2025-07-12 15:34:22.907258 | orchestrator | 2025-07-12 15:34:22 | INFO  | Task f69b1f46-2866-48dc-942a-a16b44187378 (sync inventory) is running in background. Output coming soon.
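Editor's note: the "Print configuration data" output earlier in the log shows each entry of `ceph_osd_devices` being turned into one `lvm_volumes` entry. The mapping that produces it can be sketched as below. This is a Python illustration of the mapping visible in the log, not the actual OSISM task (which is Jinja2 inside an Ansible playbook); the variable names are taken from the log output.

```python
# Sketch, reconstructed from the "Print configuration data" output above:
# each OSD device carries a deterministic-looking osd_lvm_uuid, and the
# "Generate lvm_volumes structure (block only)" task maps it to an LV/VG pair.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "98eaa118-ceae-5fd7-911b-5a5c065fb5e7"},
    "sdc": {"osd_lvm_uuid": "d3106c13-92fd-5dcd-ba4d-74ce9f77b023"},
}

lvm_volumes = [
    {
        # logical volume holding the OSD block data
        "data": f"osd-block-{spec['osd_lvm_uuid']}",
        # volume group named after the same UUID
        "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
    }
    for spec in ceph_osd_devices.values()
]
```

The `5` in the third group of each UUID suggests name-based (version-5) UUIDs, which would keep VG/LV names stable across re-runs; that is an inference from the values in the log, not something the log states.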
2025-07-12 15:34:41.013274 | orchestrator | 2025-07-12 15:34:24 | INFO  | Starting group_vars file reorganization 2025-07-12 15:34:41.013375 | orchestrator | 2025-07-12 15:34:24 | INFO  | Moved 0 file(s) to their respective directories 2025-07-12 15:34:41.013391 | orchestrator | 2025-07-12 15:34:24 | INFO  | Group_vars file reorganization completed 2025-07-12 15:34:41.013402 | orchestrator | 2025-07-12 15:34:26 | INFO  | Starting variable preparation from inventory 2025-07-12 15:34:41.013413 | orchestrator | 2025-07-12 15:34:27 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-07-12 15:34:41.013424 | orchestrator | 2025-07-12 15:34:27 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-07-12 15:34:41.013435 | orchestrator | 2025-07-12 15:34:27 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-07-12 15:34:41.013446 | orchestrator | 2025-07-12 15:34:27 | INFO  | 3 file(s) written, 6 host(s) processed 2025-07-12 15:34:41.013457 | orchestrator | 2025-07-12 15:34:27 | INFO  | Variable preparation completed 2025-07-12 15:34:41.013468 | orchestrator | 2025-07-12 15:34:28 | INFO  | Starting inventory overwrite handling 2025-07-12 15:34:41.013479 | orchestrator | 2025-07-12 15:34:28 | INFO  | Handling group overwrites in 99-overwrite 2025-07-12 15:34:41.013548 | orchestrator | 2025-07-12 15:34:28 | INFO  | Removing group frr:children from 60-generic 2025-07-12 15:34:41.013563 | orchestrator | 2025-07-12 15:34:28 | INFO  | Removing group storage:children from 50-kolla 2025-07-12 15:34:41.013574 | orchestrator | 2025-07-12 15:34:28 | INFO  | Removing group netbird:children from 50-infrastruture 2025-07-12 15:34:41.013585 | orchestrator | 2025-07-12 15:34:28 | INFO  | Removing group ceph-mds from 50-ceph 2025-07-12 15:34:41.013596 | orchestrator | 2025-07-12 15:34:28 | INFO  | Removing group ceph-rgw from 50-ceph 2025-07-12 15:34:41.013607 | orchestrator | 2025-07-12 15:34:28 | INFO  | Handling group 
overwrites in 20-roles 2025-07-12 15:34:41.013618 | orchestrator | 2025-07-12 15:34:28 | INFO  | Removing group k3s_node from 50-infrastruture 2025-07-12 15:34:41.013629 | orchestrator | 2025-07-12 15:34:28 | INFO  | Removed 6 group(s) in total 2025-07-12 15:34:41.013640 | orchestrator | 2025-07-12 15:34:28 | INFO  | Inventory overwrite handling completed 2025-07-12 15:34:41.013651 | orchestrator | 2025-07-12 15:34:29 | INFO  | Starting merge of inventory files 2025-07-12 15:34:41.013688 | orchestrator | 2025-07-12 15:34:29 | INFO  | Inventory files merged successfully 2025-07-12 15:34:41.013699 | orchestrator | 2025-07-12 15:34:33 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-07-12 15:34:41.013710 | orchestrator | 2025-07-12 15:34:39 | INFO  | Successfully wrote ClusterShell configuration 2025-07-12 15:34:41.013722 | orchestrator | [master 00176f8] 2025-07-12-15-34 2025-07-12 15:34:41.013733 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-07-12 15:34:43.042178 | orchestrator | 2025-07-12 15:34:43 | INFO  | Task e93bb9e0-c4f1-4b4c-930f-5c50c4114014 (ceph-create-lvm-devices) was prepared for execution. 2025-07-12 15:34:43.042296 | orchestrator | 2025-07-12 15:34:43 | INFO  | It takes a moment until task e93bb9e0-c4f1-4b4c-930f-5c50c4114014 (ceph-create-lvm-devices) has been started and output is visible here. 
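Editor's note: the sync-inventory output above ("Handling group overwrites in 99-overwrite", "Removing group frr:children from 60-generic", …) implies that groups declared in a higher-priority inventory layer are deleted from lower-priority layers before the files are merged, so the overwrite wins. A hypothetical sketch of that behavior, under the assumption that layers are simple group mappings; the layer and group names come from the log, while the data model and `handle_overwrites` function are illustrative, not the real osism implementation:

```python
# Illustrative model: one dict per inventory layer, mapping group name -> members.
def handle_overwrites(layers, overlay):
    """Remove every group defined in `overlay` from all other layers."""
    removed = 0
    for group in layers[overlay]:
        for name, groups in layers.items():
            if name != overlay and group in groups:
                print(f"Removing group {group} from {name}")
                del groups[group]
                removed += 1
    return removed

layers = {
    "60-generic": {"frr:children": ["testbed-nodes"]},
    "50-kolla": {"storage:children": ["testbed-nodes"]},
    "99-overwrite": {"frr:children": [], "storage:children": []},
}
removed = handle_overwrites(layers, "99-overwrite")
```

After this step a plain merge of the remaining layers cannot reintroduce a conflicting definition, which matches the log's "Removed 6 group(s) in total" followed by "Inventory files merged successfully".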
2025-07-12 15:34:53.963627 | orchestrator | 2025-07-12 15:34:53.963784 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-12 15:34:53.963815 | orchestrator | 2025-07-12 15:34:53.963836 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 15:34:53.963857 | orchestrator | Saturday 12 July 2025 15:34:47 +0000 (0:00:00.255) 0:00:00.255 ********* 2025-07-12 15:34:53.963877 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 15:34:53.963897 | orchestrator | 2025-07-12 15:34:53.963916 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 15:34:53.963935 | orchestrator | Saturday 12 July 2025 15:34:47 +0000 (0:00:00.220) 0:00:00.476 ********* 2025-07-12 15:34:53.963955 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:34:53.963976 | orchestrator | 2025-07-12 15:34:53.963996 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.964014 | orchestrator | Saturday 12 July 2025 15:34:47 +0000 (0:00:00.204) 0:00:00.681 ********* 2025-07-12 15:34:53.964034 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-12 15:34:53.964055 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-12 15:34:53.964075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-12 15:34:53.964096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-12 15:34:53.964117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-12 15:34:53.964139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-12 15:34:53.964160 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-12 15:34:53.964183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-12 15:34:53.964205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-12 15:34:53.964227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-12 15:34:53.964247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-12 15:34:53.964267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-12 15:34:53.964287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-12 15:34:53.964306 | orchestrator | 2025-07-12 15:34:53.964325 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.964344 | orchestrator | Saturday 12 July 2025 15:34:47 +0000 (0:00:00.367) 0:00:01.048 ********* 2025-07-12 15:34:53.964365 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.964384 | orchestrator | 2025-07-12 15:34:53.964404 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.964424 | orchestrator | Saturday 12 July 2025 15:34:48 +0000 (0:00:00.347) 0:00:01.396 ********* 2025-07-12 15:34:53.964511 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.964537 | orchestrator | 2025-07-12 15:34:53.964557 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.964576 | orchestrator | Saturday 12 July 2025 15:34:48 +0000 (0:00:00.190) 0:00:01.586 ********* 2025-07-12 15:34:53.964596 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.964613 | orchestrator | 2025-07-12 15:34:53.964630 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-07-12 15:34:53.964647 | orchestrator | Saturday 12 July 2025 15:34:48 +0000 (0:00:00.180) 0:00:01.766 ********* 2025-07-12 15:34:53.964665 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.964684 | orchestrator | 2025-07-12 15:34:53.964702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.964721 | orchestrator | Saturday 12 July 2025 15:34:48 +0000 (0:00:00.173) 0:00:01.940 ********* 2025-07-12 15:34:53.964738 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.964756 | orchestrator | 2025-07-12 15:34:53.964776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.964793 | orchestrator | Saturday 12 July 2025 15:34:48 +0000 (0:00:00.176) 0:00:02.116 ********* 2025-07-12 15:34:53.964809 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.964820 | orchestrator | 2025-07-12 15:34:53.964831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.964842 | orchestrator | Saturday 12 July 2025 15:34:49 +0000 (0:00:00.180) 0:00:02.296 ********* 2025-07-12 15:34:53.964852 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.964862 | orchestrator | 2025-07-12 15:34:53.964873 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.964885 | orchestrator | Saturday 12 July 2025 15:34:49 +0000 (0:00:00.185) 0:00:02.482 ********* 2025-07-12 15:34:53.964895 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.964906 | orchestrator | 2025-07-12 15:34:53.964916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.964927 | orchestrator | Saturday 12 July 2025 15:34:49 +0000 (0:00:00.165) 0:00:02.648 ********* 2025-07-12 15:34:53.964937 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b) 2025-07-12 15:34:53.964949 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b) 2025-07-12 15:34:53.964960 | orchestrator | 2025-07-12 15:34:53.964971 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.964981 | orchestrator | Saturday 12 July 2025 15:34:49 +0000 (0:00:00.379) 0:00:03.028 ********* 2025-07-12 15:34:53.965016 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984) 2025-07-12 15:34:53.965028 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984) 2025-07-12 15:34:53.965039 | orchestrator | 2025-07-12 15:34:53.965049 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.965060 | orchestrator | Saturday 12 July 2025 15:34:50 +0000 (0:00:00.361) 0:00:03.389 ********* 2025-07-12 15:34:53.965071 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2) 2025-07-12 15:34:53.965081 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2) 2025-07-12 15:34:53.965092 | orchestrator | 2025-07-12 15:34:53.965102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.965113 | orchestrator | Saturday 12 July 2025 15:34:50 +0000 (0:00:00.500) 0:00:03.890 ********* 2025-07-12 15:34:53.965136 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21) 2025-07-12 15:34:53.965148 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21) 2025-07-12 15:34:53.965174 | orchestrator | 2025-07-12 15:34:53.965193 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:34:53.965209 | orchestrator | Saturday 12 July 2025 15:34:51 +0000 (0:00:00.520) 0:00:04.411 ********* 2025-07-12 15:34:53.965227 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 15:34:53.965244 | orchestrator | 2025-07-12 15:34:53.965261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:34:53.965278 | orchestrator | Saturday 12 July 2025 15:34:51 +0000 (0:00:00.676) 0:00:05.087 ********* 2025-07-12 15:34:53.965294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-12 15:34:53.965313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-12 15:34:53.965330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-12 15:34:53.965350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-07-12 15:34:53.965368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-12 15:34:53.965386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-12 15:34:53.965397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-12 15:34:53.965408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-12 15:34:53.965418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-12 15:34:53.965429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-12 15:34:53.965439 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-12 15:34:53.965449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-12 15:34:53.965460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-12 15:34:53.965470 | orchestrator | 2025-07-12 15:34:53.965506 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:34:53.965519 | orchestrator | Saturday 12 July 2025 15:34:52 +0000 (0:00:00.402) 0:00:05.489 ********* 2025-07-12 15:34:53.965529 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.965540 | orchestrator | 2025-07-12 15:34:53.965550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:34:53.965561 | orchestrator | Saturday 12 July 2025 15:34:52 +0000 (0:00:00.202) 0:00:05.692 ********* 2025-07-12 15:34:53.965571 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.965582 | orchestrator | 2025-07-12 15:34:53.965600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:34:53.965617 | orchestrator | Saturday 12 July 2025 15:34:52 +0000 (0:00:00.215) 0:00:05.908 ********* 2025-07-12 15:34:53.965635 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.965653 | orchestrator | 2025-07-12 15:34:53.965671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:34:53.965689 | orchestrator | Saturday 12 July 2025 15:34:52 +0000 (0:00:00.205) 0:00:06.114 ********* 2025-07-12 15:34:53.965706 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.965724 | orchestrator | 2025-07-12 15:34:53.965744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:34:53.965764 | orchestrator | Saturday 12 July 2025 
15:34:53 +0000 (0:00:00.214) 0:00:06.328 ********* 2025-07-12 15:34:53.965784 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.965803 | orchestrator | 2025-07-12 15:34:53.965823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:34:53.965842 | orchestrator | Saturday 12 July 2025 15:34:53 +0000 (0:00:00.191) 0:00:06.519 ********* 2025-07-12 15:34:53.965861 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.965895 | orchestrator | 2025-07-12 15:34:53.965914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:34:53.965932 | orchestrator | Saturday 12 July 2025 15:34:53 +0000 (0:00:00.203) 0:00:06.722 ********* 2025-07-12 15:34:53.965949 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:34:53.965967 | orchestrator | 2025-07-12 15:34:53.965985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:34:53.966004 | orchestrator | Saturday 12 July 2025 15:34:53 +0000 (0:00:00.237) 0:00:06.960 ********* 2025-07-12 15:34:53.966131 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.815787 | orchestrator | 2025-07-12 15:35:01.815899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:01.815917 | orchestrator | Saturday 12 July 2025 15:34:53 +0000 (0:00:00.201) 0:00:07.161 ********* 2025-07-12 15:35:01.815929 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-12 15:35:01.815942 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-12 15:35:01.815953 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-12 15:35:01.815964 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-12 15:35:01.815975 | orchestrator | 2025-07-12 15:35:01.815986 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:01.815997 | 
orchestrator | Saturday 12 July 2025 15:34:54 +0000 (0:00:01.023) 0:00:08.185 ********* 2025-07-12 15:35:01.816008 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816019 | orchestrator | 2025-07-12 15:35:01.816030 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:01.816041 | orchestrator | Saturday 12 July 2025 15:34:55 +0000 (0:00:00.196) 0:00:08.382 ********* 2025-07-12 15:35:01.816052 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816062 | orchestrator | 2025-07-12 15:35:01.816073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:01.816084 | orchestrator | Saturday 12 July 2025 15:34:55 +0000 (0:00:00.209) 0:00:08.592 ********* 2025-07-12 15:35:01.816094 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816105 | orchestrator | 2025-07-12 15:35:01.816116 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:01.816127 | orchestrator | Saturday 12 July 2025 15:34:55 +0000 (0:00:00.202) 0:00:08.794 ********* 2025-07-12 15:35:01.816137 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816148 | orchestrator | 2025-07-12 15:35:01.816158 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-12 15:35:01.816169 | orchestrator | Saturday 12 July 2025 15:34:55 +0000 (0:00:00.191) 0:00:08.986 ********* 2025-07-12 15:35:01.816179 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816190 | orchestrator | 2025-07-12 15:35:01.816201 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-12 15:35:01.816211 | orchestrator | Saturday 12 July 2025 15:34:55 +0000 (0:00:00.135) 0:00:09.121 ********* 2025-07-12 15:35:01.816223 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'0c0189bb-8103-55ae-95fc-ac60d34dc15f'}}) 2025-07-12 15:35:01.816234 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'}}) 2025-07-12 15:35:01.816245 | orchestrator | 2025-07-12 15:35:01.816256 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-12 15:35:01.816267 | orchestrator | Saturday 12 July 2025 15:34:56 +0000 (0:00:00.180) 0:00:09.302 ********* 2025-07-12 15:35:01.816278 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'}) 2025-07-12 15:35:01.816291 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'}) 2025-07-12 15:35:01.816302 | orchestrator | 2025-07-12 15:35:01.816313 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-12 15:35:01.816355 | orchestrator | Saturday 12 July 2025 15:34:58 +0000 (0:00:02.110) 0:00:11.413 ********* 2025-07-12 15:35:01.816369 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:01.816383 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:01.816431 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816456 | orchestrator | 2025-07-12 15:35:01.816507 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-12 15:35:01.816525 | orchestrator | Saturday 12 July 2025 15:34:58 +0000 (0:00:00.157) 0:00:11.570 ********* 2025-07-12 15:35:01.816543 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'}) 2025-07-12 15:35:01.816563 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'}) 2025-07-12 15:35:01.816582 | orchestrator | 2025-07-12 15:35:01.816601 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-12 15:35:01.816620 | orchestrator | Saturday 12 July 2025 15:34:59 +0000 (0:00:01.400) 0:00:12.970 ********* 2025-07-12 15:35:01.816634 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:01.816648 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:01.816660 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816671 | orchestrator | 2025-07-12 15:35:01.816681 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-12 15:35:01.816692 | orchestrator | Saturday 12 July 2025 15:34:59 +0000 (0:00:00.141) 0:00:13.112 ********* 2025-07-12 15:35:01.816703 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816714 | orchestrator | 2025-07-12 15:35:01.816724 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-12 15:35:01.816756 | orchestrator | Saturday 12 July 2025 15:35:00 +0000 (0:00:00.137) 0:00:13.249 ********* 2025-07-12 15:35:01.816769 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:01.816780 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:01.816791 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816801 | orchestrator | 2025-07-12 15:35:01.816812 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-12 15:35:01.816823 | orchestrator | Saturday 12 July 2025 15:35:00 +0000 (0:00:00.342) 0:00:13.591 ********* 2025-07-12 15:35:01.816833 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816844 | orchestrator | 2025-07-12 15:35:01.816854 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-12 15:35:01.816865 | orchestrator | Saturday 12 July 2025 15:35:00 +0000 (0:00:00.136) 0:00:13.728 ********* 2025-07-12 15:35:01.816876 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:01.816886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:01.816897 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816907 | orchestrator | 2025-07-12 15:35:01.816918 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-12 15:35:01.816941 | orchestrator | Saturday 12 July 2025 15:35:00 +0000 (0:00:00.139) 0:00:13.868 ********* 2025-07-12 15:35:01.816952 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.816962 | orchestrator | 2025-07-12 15:35:01.816973 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-12 15:35:01.817003 | orchestrator | Saturday 12 July 2025 15:35:00 +0000 (0:00:00.132) 0:00:14.000 ********* 2025-07-12 15:35:01.817014 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:01.817025 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:01.817036 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.817047 | orchestrator | 2025-07-12 15:35:01.817057 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-12 15:35:01.817068 | orchestrator | Saturday 12 July 2025 15:35:00 +0000 (0:00:00.160) 0:00:14.161 ********* 2025-07-12 15:35:01.817079 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:35:01.817090 | orchestrator | 2025-07-12 15:35:01.817100 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-12 15:35:01.817111 | orchestrator | Saturday 12 July 2025 15:35:01 +0000 (0:00:00.143) 0:00:14.305 ********* 2025-07-12 15:35:01.817122 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:01.817132 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:01.817143 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.817160 | orchestrator | 2025-07-12 15:35:01.817177 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-12 15:35:01.817194 | orchestrator | Saturday 12 July 2025 15:35:01 +0000 (0:00:00.152) 0:00:14.457 ********* 2025-07-12 15:35:01.817211 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  
2025-07-12 15:35:01.817228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:01.817246 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.817267 | orchestrator | 2025-07-12 15:35:01.817285 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-12 15:35:01.817301 | orchestrator | Saturday 12 July 2025 15:35:01 +0000 (0:00:00.146) 0:00:14.603 ********* 2025-07-12 15:35:01.817312 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:01.817323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:01.817333 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.817344 | orchestrator | 2025-07-12 15:35:01.817355 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-12 15:35:01.817365 | orchestrator | Saturday 12 July 2025 15:35:01 +0000 (0:00:00.146) 0:00:14.749 ********* 2025-07-12 15:35:01.817376 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.817386 | orchestrator | 2025-07-12 15:35:01.817397 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-12 15:35:01.817408 | orchestrator | Saturday 12 July 2025 15:35:01 +0000 (0:00:00.134) 0:00:14.884 ********* 2025-07-12 15:35:01.817418 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:01.817432 | orchestrator | 2025-07-12 15:35:01.817468 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-12 15:35:07.869922 | orchestrator | Saturday 12 July 2025 15:35:01 +0000 (0:00:00.133) 
0:00:15.017 ********* 2025-07-12 15:35:07.870076 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.870096 | orchestrator | 2025-07-12 15:35:07.870107 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-12 15:35:07.870117 | orchestrator | Saturday 12 July 2025 15:35:01 +0000 (0:00:00.135) 0:00:15.153 ********* 2025-07-12 15:35:07.870127 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 15:35:07.870138 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-12 15:35:07.870148 | orchestrator | } 2025-07-12 15:35:07.870158 | orchestrator | 2025-07-12 15:35:07.870168 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-12 15:35:07.870178 | orchestrator | Saturday 12 July 2025 15:35:02 +0000 (0:00:00.321) 0:00:15.475 ********* 2025-07-12 15:35:07.870188 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 15:35:07.870197 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-12 15:35:07.870207 | orchestrator | } 2025-07-12 15:35:07.870217 | orchestrator | 2025-07-12 15:35:07.870226 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-12 15:35:07.870236 | orchestrator | Saturday 12 July 2025 15:35:02 +0000 (0:00:00.142) 0:00:15.617 ********* 2025-07-12 15:35:07.870246 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 15:35:07.870255 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-12 15:35:07.870265 | orchestrator | } 2025-07-12 15:35:07.870275 | orchestrator | 2025-07-12 15:35:07.870284 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-12 15:35:07.870294 | orchestrator | Saturday 12 July 2025 15:35:02 +0000 (0:00:00.135) 0:00:15.752 ********* 2025-07-12 15:35:07.870304 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:35:07.870313 | orchestrator | 2025-07-12 15:35:07.870323 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-07-12 15:35:07.870332 | orchestrator | Saturday 12 July 2025 15:35:03 +0000 (0:00:00.646) 0:00:16.399 ********* 2025-07-12 15:35:07.870342 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:35:07.870351 | orchestrator | 2025-07-12 15:35:07.870361 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-12 15:35:07.870371 | orchestrator | Saturday 12 July 2025 15:35:03 +0000 (0:00:00.504) 0:00:16.904 ********* 2025-07-12 15:35:07.870381 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:35:07.870390 | orchestrator | 2025-07-12 15:35:07.870400 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-12 15:35:07.870409 | orchestrator | Saturday 12 July 2025 15:35:04 +0000 (0:00:00.520) 0:00:17.424 ********* 2025-07-12 15:35:07.870419 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:35:07.870428 | orchestrator | 2025-07-12 15:35:07.870438 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-12 15:35:07.870448 | orchestrator | Saturday 12 July 2025 15:35:04 +0000 (0:00:00.146) 0:00:17.571 ********* 2025-07-12 15:35:07.870459 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.870491 | orchestrator | 2025-07-12 15:35:07.870503 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-12 15:35:07.870514 | orchestrator | Saturday 12 July 2025 15:35:04 +0000 (0:00:00.120) 0:00:17.691 ********* 2025-07-12 15:35:07.870525 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.870536 | orchestrator | 2025-07-12 15:35:07.870547 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-12 15:35:07.870558 | orchestrator | Saturday 12 July 2025 15:35:04 +0000 (0:00:00.112) 0:00:17.804 ********* 2025-07-12 15:35:07.870570 | orchestrator | ok: 
[testbed-node-3] => { 2025-07-12 15:35:07.870581 | orchestrator |  "vgs_report": { 2025-07-12 15:35:07.870592 | orchestrator |  "vg": [] 2025-07-12 15:35:07.870603 | orchestrator |  } 2025-07-12 15:35:07.870614 | orchestrator | } 2025-07-12 15:35:07.870625 | orchestrator | 2025-07-12 15:35:07.870636 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-12 15:35:07.870647 | orchestrator | Saturday 12 July 2025 15:35:04 +0000 (0:00:00.133) 0:00:17.938 ********* 2025-07-12 15:35:07.870684 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.870696 | orchestrator | 2025-07-12 15:35:07.870707 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-12 15:35:07.870731 | orchestrator | Saturday 12 July 2025 15:35:04 +0000 (0:00:00.122) 0:00:18.061 ********* 2025-07-12 15:35:07.870742 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.870751 | orchestrator | 2025-07-12 15:35:07.870770 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-12 15:35:07.870780 | orchestrator | Saturday 12 July 2025 15:35:04 +0000 (0:00:00.127) 0:00:18.188 ********* 2025-07-12 15:35:07.870790 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.870800 | orchestrator | 2025-07-12 15:35:07.870809 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-12 15:35:07.870819 | orchestrator | Saturday 12 July 2025 15:35:05 +0000 (0:00:00.327) 0:00:18.516 ********* 2025-07-12 15:35:07.870828 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.870838 | orchestrator | 2025-07-12 15:35:07.870847 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-12 15:35:07.870857 | orchestrator | Saturday 12 July 2025 15:35:05 +0000 (0:00:00.135) 0:00:18.651 ********* 2025-07-12 15:35:07.870866 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 15:35:07.870876 | orchestrator | 2025-07-12 15:35:07.870885 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-12 15:35:07.870895 | orchestrator | Saturday 12 July 2025 15:35:05 +0000 (0:00:00.134) 0:00:18.786 ********* 2025-07-12 15:35:07.870904 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.870914 | orchestrator | 2025-07-12 15:35:07.870923 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-12 15:35:07.870933 | orchestrator | Saturday 12 July 2025 15:35:05 +0000 (0:00:00.127) 0:00:18.913 ********* 2025-07-12 15:35:07.870942 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.870952 | orchestrator | 2025-07-12 15:35:07.870961 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-12 15:35:07.870971 | orchestrator | Saturday 12 July 2025 15:35:05 +0000 (0:00:00.136) 0:00:19.050 ********* 2025-07-12 15:35:07.870980 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.870990 | orchestrator | 2025-07-12 15:35:07.871000 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-12 15:35:07.871027 | orchestrator | Saturday 12 July 2025 15:35:05 +0000 (0:00:00.142) 0:00:19.192 ********* 2025-07-12 15:35:07.871038 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.871047 | orchestrator | 2025-07-12 15:35:07.871057 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-12 15:35:07.871067 | orchestrator | Saturday 12 July 2025 15:35:06 +0000 (0:00:00.129) 0:00:19.322 ********* 2025-07-12 15:35:07.871076 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.871086 | orchestrator | 2025-07-12 15:35:07.871095 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-12 15:35:07.871105 | 
orchestrator | Saturday 12 July 2025 15:35:06 +0000 (0:00:00.137) 0:00:19.459 ********* 2025-07-12 15:35:07.871115 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.871124 | orchestrator | 2025-07-12 15:35:07.871134 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-12 15:35:07.871143 | orchestrator | Saturday 12 July 2025 15:35:06 +0000 (0:00:00.135) 0:00:19.594 ********* 2025-07-12 15:35:07.871153 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.871162 | orchestrator | 2025-07-12 15:35:07.871172 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-12 15:35:07.871181 | orchestrator | Saturday 12 July 2025 15:35:06 +0000 (0:00:00.142) 0:00:19.737 ********* 2025-07-12 15:35:07.871191 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.871201 | orchestrator | 2025-07-12 15:35:07.871210 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-12 15:35:07.871220 | orchestrator | Saturday 12 July 2025 15:35:06 +0000 (0:00:00.124) 0:00:19.862 ********* 2025-07-12 15:35:07.871281 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.871292 | orchestrator | 2025-07-12 15:35:07.871301 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-12 15:35:07.871311 | orchestrator | Saturday 12 July 2025 15:35:06 +0000 (0:00:00.135) 0:00:19.998 ********* 2025-07-12 15:35:07.871322 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:07.871333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:07.871343 | orchestrator | skipping: [testbed-node-3] 2025-07-12 
15:35:07.871353 | orchestrator | 2025-07-12 15:35:07.871362 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-12 15:35:07.871372 | orchestrator | Saturday 12 July 2025 15:35:06 +0000 (0:00:00.148) 0:00:20.146 ********* 2025-07-12 15:35:07.871381 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:07.871391 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:07.871401 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.871410 | orchestrator | 2025-07-12 15:35:07.871419 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-12 15:35:07.871429 | orchestrator | Saturday 12 July 2025 15:35:07 +0000 (0:00:00.338) 0:00:20.484 ********* 2025-07-12 15:35:07.871438 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:07.871448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:07.871458 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.871467 | orchestrator | 2025-07-12 15:35:07.871541 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-12 15:35:07.871552 | orchestrator | Saturday 12 July 2025 15:35:07 +0000 (0:00:00.153) 0:00:20.638 ********* 2025-07-12 15:35:07.871561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 
15:35:07.871571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:07.871581 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.871590 | orchestrator | 2025-07-12 15:35:07.871600 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-12 15:35:07.871627 | orchestrator | Saturday 12 July 2025 15:35:07 +0000 (0:00:00.134) 0:00:20.772 ********* 2025-07-12 15:35:07.871637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:07.871647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:07.871657 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:07.871666 | orchestrator | 2025-07-12 15:35:07.871676 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-12 15:35:07.871685 | orchestrator | Saturday 12 July 2025 15:35:07 +0000 (0:00:00.152) 0:00:20.925 ********* 2025-07-12 15:35:07.871699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:07.871724 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:13.075166 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:13.075269 | orchestrator | 2025-07-12 15:35:13.075285 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-12 15:35:13.075299 | orchestrator | Saturday 12 July 2025 
15:35:07 +0000 (0:00:00.145) 0:00:21.071 ********* 2025-07-12 15:35:13.075311 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:13.075323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:13.075334 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:13.075345 | orchestrator | 2025-07-12 15:35:13.075356 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-12 15:35:13.075367 | orchestrator | Saturday 12 July 2025 15:35:08 +0000 (0:00:00.147) 0:00:21.219 ********* 2025-07-12 15:35:13.075378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:13.075389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:13.075400 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:13.075411 | orchestrator | 2025-07-12 15:35:13.075421 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-12 15:35:13.075433 | orchestrator | Saturday 12 July 2025 15:35:08 +0000 (0:00:00.159) 0:00:21.379 ********* 2025-07-12 15:35:13.075444 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:35:13.075456 | orchestrator | 2025-07-12 15:35:13.075538 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-12 15:35:13.075558 | orchestrator | Saturday 12 July 2025 15:35:08 +0000 (0:00:00.488) 0:00:21.867 ********* 2025-07-12 15:35:13.075569 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:35:13.075579 | 
orchestrator | 2025-07-12 15:35:13.075590 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-12 15:35:13.075600 | orchestrator | Saturday 12 July 2025 15:35:09 +0000 (0:00:00.522) 0:00:22.390 ********* 2025-07-12 15:35:13.075610 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:35:13.075621 | orchestrator | 2025-07-12 15:35:13.075631 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-12 15:35:13.075642 | orchestrator | Saturday 12 July 2025 15:35:09 +0000 (0:00:00.125) 0:00:22.516 ********* 2025-07-12 15:35:13.075653 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'vg_name': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'}) 2025-07-12 15:35:13.075667 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'vg_name': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'}) 2025-07-12 15:35:13.075680 | orchestrator | 2025-07-12 15:35:13.075691 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-12 15:35:13.075704 | orchestrator | Saturday 12 July 2025 15:35:09 +0000 (0:00:00.167) 0:00:22.684 ********* 2025-07-12 15:35:13.075716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:13.075729 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:13.075742 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:13.075754 | orchestrator | 2025-07-12 15:35:13.075766 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-12 15:35:13.075778 | orchestrator | Saturday 12 July 2025 15:35:09 +0000 
(0:00:00.166) 0:00:22.850 ********* 2025-07-12 15:35:13.075816 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:13.075829 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:13.075842 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:13.075854 | orchestrator | 2025-07-12 15:35:13.075867 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-12 15:35:13.075879 | orchestrator | Saturday 12 July 2025 15:35:09 +0000 (0:00:00.353) 0:00:23.203 ********* 2025-07-12 15:35:13.075892 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'})  2025-07-12 15:35:13.075905 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'})  2025-07-12 15:35:13.075917 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:35:13.075929 | orchestrator | 2025-07-12 15:35:13.075941 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-12 15:35:13.075972 | orchestrator | Saturday 12 July 2025 15:35:10 +0000 (0:00:00.151) 0:00:23.355 ********* 2025-07-12 15:35:13.075986 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 15:35:13.075999 | orchestrator |  "lvm_report": { 2025-07-12 15:35:13.076010 | orchestrator |  "lv": [ 2025-07-12 15:35:13.076020 | orchestrator |  { 2025-07-12 15:35:13.076049 | orchestrator |  "lv_name": "osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f", 2025-07-12 15:35:13.076061 | orchestrator |  "vg_name": "ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f" 2025-07-12 
15:35:13.076072 | orchestrator |  }, 2025-07-12 15:35:13.076083 | orchestrator |  { 2025-07-12 15:35:13.076093 | orchestrator |  "lv_name": "osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f", 2025-07-12 15:35:13.076104 | orchestrator |  "vg_name": "ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f" 2025-07-12 15:35:13.076114 | orchestrator |  } 2025-07-12 15:35:13.076125 | orchestrator |  ], 2025-07-12 15:35:13.076136 | orchestrator |  "pv": [ 2025-07-12 15:35:13.076147 | orchestrator |  { 2025-07-12 15:35:13.076157 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-12 15:35:13.076168 | orchestrator |  "vg_name": "ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f" 2025-07-12 15:35:13.076179 | orchestrator |  }, 2025-07-12 15:35:13.076189 | orchestrator |  { 2025-07-12 15:35:13.076200 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-12 15:35:13.076210 | orchestrator |  "vg_name": "ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f" 2025-07-12 15:35:13.076221 | orchestrator |  } 2025-07-12 15:35:13.076231 | orchestrator |  ] 2025-07-12 15:35:13.076242 | orchestrator |  } 2025-07-12 15:35:13.076253 | orchestrator | } 2025-07-12 15:35:13.076264 | orchestrator | 2025-07-12 15:35:13.076274 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-12 15:35:13.076285 | orchestrator | 2025-07-12 15:35:13.076295 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 15:35:13.076306 | orchestrator | Saturday 12 July 2025 15:35:10 +0000 (0:00:00.300) 0:00:23.656 ********* 2025-07-12 15:35:13.076317 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-12 15:35:13.076328 | orchestrator | 2025-07-12 15:35:13.076338 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 15:35:13.076349 | orchestrator | Saturday 12 July 2025 15:35:10 +0000 (0:00:00.243) 0:00:23.900 ********* 2025-07-12 15:35:13.076359 | orchestrator | ok: 
[testbed-node-4] 2025-07-12 15:35:13.076370 | orchestrator | 2025-07-12 15:35:13.076381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:13.076399 | orchestrator | Saturday 12 July 2025 15:35:10 +0000 (0:00:00.230) 0:00:24.130 ********* 2025-07-12 15:35:13.076410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-12 15:35:13.076421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-07-12 15:35:13.076447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-12 15:35:13.076458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-12 15:35:13.076492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-12 15:35:13.076504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-12 15:35:13.076514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-12 15:35:13.076525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-12 15:35:13.076535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-12 15:35:13.076546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-12 15:35:13.076556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-12 15:35:13.076567 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-12 15:35:13.076577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-12 15:35:13.076587 | orchestrator | 2025-07-12 
15:35:13.076598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:13.076609 | orchestrator | Saturday 12 July 2025 15:35:11 +0000 (0:00:00.413) 0:00:24.543 ********* 2025-07-12 15:35:13.076619 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:13.076630 | orchestrator | 2025-07-12 15:35:13.076640 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:13.076651 | orchestrator | Saturday 12 July 2025 15:35:11 +0000 (0:00:00.195) 0:00:24.739 ********* 2025-07-12 15:35:13.076661 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:13.076672 | orchestrator | 2025-07-12 15:35:13.076682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:13.076693 | orchestrator | Saturday 12 July 2025 15:35:11 +0000 (0:00:00.188) 0:00:24.927 ********* 2025-07-12 15:35:13.076703 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:13.076714 | orchestrator | 2025-07-12 15:35:13.076724 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:13.076735 | orchestrator | Saturday 12 July 2025 15:35:11 +0000 (0:00:00.177) 0:00:25.105 ********* 2025-07-12 15:35:13.076745 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:13.076756 | orchestrator | 2025-07-12 15:35:13.076766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:13.076777 | orchestrator | Saturday 12 July 2025 15:35:12 +0000 (0:00:00.573) 0:00:25.678 ********* 2025-07-12 15:35:13.076787 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:13.076798 | orchestrator | 2025-07-12 15:35:13.076808 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:13.076819 | orchestrator | Saturday 12 July 2025 15:35:12 +0000 (0:00:00.197) 
0:00:25.876 ********* 2025-07-12 15:35:13.076829 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:13.076840 | orchestrator | 2025-07-12 15:35:13.076850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:13.076861 | orchestrator | Saturday 12 July 2025 15:35:12 +0000 (0:00:00.188) 0:00:26.064 ********* 2025-07-12 15:35:13.076871 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:13.076882 | orchestrator | 2025-07-12 15:35:13.076900 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:23.137531 | orchestrator | Saturday 12 July 2025 15:35:13 +0000 (0:00:00.212) 0:00:26.277 ********* 2025-07-12 15:35:23.137688 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.138415 | orchestrator | 2025-07-12 15:35:23.138438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:23.138450 | orchestrator | Saturday 12 July 2025 15:35:13 +0000 (0:00:00.183) 0:00:26.461 ********* 2025-07-12 15:35:23.138478 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd) 2025-07-12 15:35:23.138491 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd) 2025-07-12 15:35:23.138502 | orchestrator | 2025-07-12 15:35:23.138513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:23.138525 | orchestrator | Saturday 12 July 2025 15:35:13 +0000 (0:00:00.415) 0:00:26.876 ********* 2025-07-12 15:35:23.138536 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f) 2025-07-12 15:35:23.138546 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f) 2025-07-12 15:35:23.138557 | orchestrator | 2025-07-12 15:35:23.138568 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:23.138578 | orchestrator | Saturday 12 July 2025 15:35:14 +0000 (0:00:00.437) 0:00:27.313 ********* 2025-07-12 15:35:23.138589 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd) 2025-07-12 15:35:23.138600 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd) 2025-07-12 15:35:23.138610 | orchestrator | 2025-07-12 15:35:23.138621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:23.138632 | orchestrator | Saturday 12 July 2025 15:35:14 +0000 (0:00:00.438) 0:00:27.751 ********* 2025-07-12 15:35:23.138642 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96) 2025-07-12 15:35:23.138653 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96) 2025-07-12 15:35:23.138664 | orchestrator | 2025-07-12 15:35:23.138675 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 15:35:23.138685 | orchestrator | Saturday 12 July 2025 15:35:14 +0000 (0:00:00.439) 0:00:28.191 ********* 2025-07-12 15:35:23.138696 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 15:35:23.138706 | orchestrator | 2025-07-12 15:35:23.138717 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.138728 | orchestrator | Saturday 12 July 2025 15:35:15 +0000 (0:00:00.316) 0:00:28.508 ********* 2025-07-12 15:35:23.138738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-07-12 15:35:23.138749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-07-12 
15:35:23.138777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-07-12 15:35:23.138789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-07-12 15:35:23.138799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-07-12 15:35:23.138809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-07-12 15:35:23.138820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-07-12 15:35:23.138830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-07-12 15:35:23.138841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-07-12 15:35:23.138851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-07-12 15:35:23.138862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-07-12 15:35:23.138884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-07-12 15:35:23.138895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-07-12 15:35:23.138906 | orchestrator | 2025-07-12 15:35:23.138916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.138927 | orchestrator | Saturday 12 July 2025 15:35:15 +0000 (0:00:00.586) 0:00:29.094 ********* 2025-07-12 15:35:23.138938 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.138948 | orchestrator | 2025-07-12 15:35:23.138959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.138969 | orchestrator | Saturday 
12 July 2025 15:35:16 +0000 (0:00:00.192) 0:00:29.287 ********* 2025-07-12 15:35:23.138980 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.138990 | orchestrator | 2025-07-12 15:35:23.139001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139017 | orchestrator | Saturday 12 July 2025 15:35:16 +0000 (0:00:00.193) 0:00:29.481 ********* 2025-07-12 15:35:23.139027 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139038 | orchestrator | 2025-07-12 15:35:23.139049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139059 | orchestrator | Saturday 12 July 2025 15:35:16 +0000 (0:00:00.188) 0:00:29.669 ********* 2025-07-12 15:35:23.139070 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139081 | orchestrator | 2025-07-12 15:35:23.139110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139121 | orchestrator | Saturday 12 July 2025 15:35:16 +0000 (0:00:00.193) 0:00:29.863 ********* 2025-07-12 15:35:23.139132 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139143 | orchestrator | 2025-07-12 15:35:23.139153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139164 | orchestrator | Saturday 12 July 2025 15:35:16 +0000 (0:00:00.183) 0:00:30.046 ********* 2025-07-12 15:35:23.139174 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139185 | orchestrator | 2025-07-12 15:35:23.139195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139206 | orchestrator | Saturday 12 July 2025 15:35:17 +0000 (0:00:00.200) 0:00:30.247 ********* 2025-07-12 15:35:23.139216 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139227 | orchestrator | 2025-07-12 15:35:23.139237 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139248 | orchestrator | Saturday 12 July 2025 15:35:17 +0000 (0:00:00.214) 0:00:30.461 ********* 2025-07-12 15:35:23.139259 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139269 | orchestrator | 2025-07-12 15:35:23.139279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139290 | orchestrator | Saturday 12 July 2025 15:35:17 +0000 (0:00:00.192) 0:00:30.654 ********* 2025-07-12 15:35:23.139301 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-12 15:35:23.139312 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-07-12 15:35:23.139322 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-07-12 15:35:23.139333 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-07-12 15:35:23.139343 | orchestrator | 2025-07-12 15:35:23.139354 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139365 | orchestrator | Saturday 12 July 2025 15:35:18 +0000 (0:00:00.812) 0:00:31.467 ********* 2025-07-12 15:35:23.139375 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139386 | orchestrator | 2025-07-12 15:35:23.139396 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139407 | orchestrator | Saturday 12 July 2025 15:35:18 +0000 (0:00:00.197) 0:00:31.664 ********* 2025-07-12 15:35:23.139417 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139428 | orchestrator | 2025-07-12 15:35:23.139449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139482 | orchestrator | Saturday 12 July 2025 15:35:18 +0000 (0:00:00.196) 0:00:31.860 ********* 2025-07-12 15:35:23.139494 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139504 | 
orchestrator | 2025-07-12 15:35:23.139515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 15:35:23.139526 | orchestrator | Saturday 12 July 2025 15:35:19 +0000 (0:00:00.603) 0:00:32.464 ********* 2025-07-12 15:35:23.139536 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139547 | orchestrator | 2025-07-12 15:35:23.139558 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-12 15:35:23.139569 | orchestrator | Saturday 12 July 2025 15:35:19 +0000 (0:00:00.206) 0:00:32.671 ********* 2025-07-12 15:35:23.139579 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139590 | orchestrator | 2025-07-12 15:35:23.139601 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-12 15:35:23.139611 | orchestrator | Saturday 12 July 2025 15:35:19 +0000 (0:00:00.133) 0:00:32.804 ********* 2025-07-12 15:35:23.139622 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ed518422-90c3-5ab9-913f-91d667874e9d'}}) 2025-07-12 15:35:23.139633 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '66e431f6-efaf-5b66-8dd9-edbf314ce410'}}) 2025-07-12 15:35:23.139644 | orchestrator | 2025-07-12 15:35:23.139654 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-12 15:35:23.139665 | orchestrator | Saturday 12 July 2025 15:35:19 +0000 (0:00:00.189) 0:00:32.994 ********* 2025-07-12 15:35:23.139676 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'}) 2025-07-12 15:35:23.139688 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'}) 2025-07-12 15:35:23.139698 | 
orchestrator | 2025-07-12 15:35:23.139709 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-12 15:35:23.139720 | orchestrator | Saturday 12 July 2025 15:35:21 +0000 (0:00:01.882) 0:00:34.877 ********* 2025-07-12 15:35:23.139730 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})  2025-07-12 15:35:23.139742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})  2025-07-12 15:35:23.139753 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:23.139764 | orchestrator | 2025-07-12 15:35:23.139774 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-12 15:35:23.139785 | orchestrator | Saturday 12 July 2025 15:35:21 +0000 (0:00:00.154) 0:00:35.031 ********* 2025-07-12 15:35:23.139801 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'}) 2025-07-12 15:35:23.139812 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'}) 2025-07-12 15:35:23.139823 | orchestrator | 2025-07-12 15:35:23.139841 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-12 15:35:28.684629 | orchestrator | Saturday 12 July 2025 15:35:23 +0000 (0:00:01.303) 0:00:36.335 ********* 2025-07-12 15:35:28.684776 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})  2025-07-12 15:35:28.684797 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})  2025-07-12 15:35:28.684872 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.684890 | orchestrator | 2025-07-12 15:35:28.684909 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-12 15:35:28.684927 | orchestrator | Saturday 12 July 2025 15:35:23 +0000 (0:00:00.160) 0:00:36.495 ********* 2025-07-12 15:35:28.684944 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.684961 | orchestrator | 2025-07-12 15:35:28.684977 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-12 15:35:28.684994 | orchestrator | Saturday 12 July 2025 15:35:23 +0000 (0:00:00.136) 0:00:36.631 ********* 2025-07-12 15:35:28.685012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})  2025-07-12 15:35:28.685029 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})  2025-07-12 15:35:28.685047 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685067 | orchestrator | 2025-07-12 15:35:28.685085 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-12 15:35:28.685103 | orchestrator | Saturday 12 July 2025 15:35:23 +0000 (0:00:00.159) 0:00:36.791 ********* 2025-07-12 15:35:28.685116 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685128 | orchestrator | 2025-07-12 15:35:28.685140 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-12 15:35:28.685152 | orchestrator | Saturday 12 July 2025 15:35:23 +0000 (0:00:00.136) 0:00:36.927 ********* 2025-07-12 15:35:28.685164 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})  2025-07-12 15:35:28.685176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})  2025-07-12 15:35:28.685188 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685200 | orchestrator | 2025-07-12 15:35:28.685212 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-12 15:35:28.685224 | orchestrator | Saturday 12 July 2025 15:35:23 +0000 (0:00:00.156) 0:00:37.084 ********* 2025-07-12 15:35:28.685236 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685248 | orchestrator | 2025-07-12 15:35:28.685260 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-12 15:35:28.685271 | orchestrator | Saturday 12 July 2025 15:35:24 +0000 (0:00:00.340) 0:00:37.424 ********* 2025-07-12 15:35:28.685283 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})  2025-07-12 15:35:28.685296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})  2025-07-12 15:35:28.685308 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685320 | orchestrator | 2025-07-12 15:35:28.685332 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-12 15:35:28.685343 | orchestrator | Saturday 12 July 2025 15:35:24 +0000 (0:00:00.154) 0:00:37.579 ********* 2025-07-12 15:35:28.685355 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:35:28.685367 | orchestrator | 2025-07-12 15:35:28.685379 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-07-12 15:35:28.685391 | orchestrator | Saturday 12 July 2025 15:35:24 +0000 (0:00:00.137) 0:00:37.717 ********* 2025-07-12 15:35:28.685404 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})  2025-07-12 15:35:28.685415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})  2025-07-12 15:35:28.685427 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685451 | orchestrator | 2025-07-12 15:35:28.685498 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-12 15:35:28.685509 | orchestrator | Saturday 12 July 2025 15:35:24 +0000 (0:00:00.155) 0:00:37.872 ********* 2025-07-12 15:35:28.685520 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})  2025-07-12 15:35:28.685532 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})  2025-07-12 15:35:28.685542 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685552 | orchestrator | 2025-07-12 15:35:28.685563 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-12 15:35:28.685574 | orchestrator | Saturday 12 July 2025 15:35:24 +0000 (0:00:00.159) 0:00:38.032 ********* 2025-07-12 15:35:28.685607 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})  2025-07-12 15:35:28.685620 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 
'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})  2025-07-12 15:35:28.685639 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685656 | orchestrator | 2025-07-12 15:35:28.685674 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-12 15:35:28.685694 | orchestrator | Saturday 12 July 2025 15:35:24 +0000 (0:00:00.146) 0:00:38.179 ********* 2025-07-12 15:35:28.685713 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685731 | orchestrator | 2025-07-12 15:35:28.685744 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-12 15:35:28.685755 | orchestrator | Saturday 12 July 2025 15:35:25 +0000 (0:00:00.145) 0:00:38.324 ********* 2025-07-12 15:35:28.685765 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685776 | orchestrator | 2025-07-12 15:35:28.685786 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-12 15:35:28.685797 | orchestrator | Saturday 12 July 2025 15:35:25 +0000 (0:00:00.136) 0:00:38.460 ********* 2025-07-12 15:35:28.685807 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:35:28.685817 | orchestrator | 2025-07-12 15:35:28.685828 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-12 15:35:28.685838 | orchestrator | Saturday 12 July 2025 15:35:25 +0000 (0:00:00.129) 0:00:38.590 ********* 2025-07-12 15:35:28.685848 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 15:35:28.685859 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-12 15:35:28.685870 | orchestrator | } 2025-07-12 15:35:28.685881 | orchestrator | 2025-07-12 15:35:28.685891 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-12 15:35:28.685901 | orchestrator | Saturday 12 July 2025 15:35:25 +0000 (0:00:00.139) 0:00:38.730 ********* 2025-07-12 15:35:28.685912 | 
orchestrator | ok: [testbed-node-4] => { 2025-07-12 15:35:28.685922 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-12 15:35:28.685933 | orchestrator | } 2025-07-12 15:35:28.685943 | orchestrator | 2025-07-12 15:35:28.685954 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-12 15:35:28.685964 | orchestrator | Saturday 12 July 2025 15:35:25 +0000 (0:00:00.132) 0:00:38.862 ********* 2025-07-12 15:35:28.685975 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 15:35:28.685986 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-12 15:35:28.685996 | orchestrator | } 2025-07-12 15:35:28.686007 | orchestrator | 2025-07-12 15:35:28.686063 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-12 15:35:28.686078 | orchestrator | Saturday 12 July 2025 15:35:25 +0000 (0:00:00.139) 0:00:39.002 ********* 2025-07-12 15:35:28.686088 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:35:28.686099 | orchestrator | 2025-07-12 15:35:28.686110 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-12 15:35:28.686133 | orchestrator | Saturday 12 July 2025 15:35:26 +0000 (0:00:00.758) 0:00:39.760 ********* 2025-07-12 15:35:28.686143 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:35:28.686154 | orchestrator | 2025-07-12 15:35:28.686165 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-12 15:35:28.686175 | orchestrator | Saturday 12 July 2025 15:35:27 +0000 (0:00:00.556) 0:00:40.317 ********* 2025-07-12 15:35:28.686186 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:35:28.686196 | orchestrator | 2025-07-12 15:35:28.686206 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-12 15:35:28.686217 | orchestrator | Saturday 12 July 2025 15:35:27 +0000 (0:00:00.506) 0:00:40.823 ********* 2025-07-12 
15:35:28.686227 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:35:28.686238 | orchestrator |
2025-07-12 15:35:28.686248 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-12 15:35:28.686259 | orchestrator | Saturday 12 July 2025 15:35:27 +0000 (0:00:00.145) 0:00:40.969 *********
2025-07-12 15:35:28.686269 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:28.686279 | orchestrator |
2025-07-12 15:35:28.686307 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-12 15:35:28.686318 | orchestrator | Saturday 12 July 2025 15:35:27 +0000 (0:00:00.114) 0:00:41.084 *********
2025-07-12 15:35:28.686329 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:28.686339 | orchestrator |
2025-07-12 15:35:28.686350 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-07-12 15:35:28.686360 | orchestrator | Saturday 12 July 2025 15:35:27 +0000 (0:00:00.109) 0:00:41.193 *********
2025-07-12 15:35:28.686370 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 15:35:28.686381 | orchestrator |     "vgs_report": {
2025-07-12 15:35:28.686392 | orchestrator |         "vg": []
2025-07-12 15:35:28.686402 | orchestrator |     }
2025-07-12 15:35:28.686413 | orchestrator | }
2025-07-12 15:35:28.686424 | orchestrator |
2025-07-12 15:35:28.686434 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-07-12 15:35:28.686445 | orchestrator | Saturday 12 July 2025 15:35:28 +0000 (0:00:00.138) 0:00:41.332 *********
2025-07-12 15:35:28.686477 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:28.686491 | orchestrator |
2025-07-12 15:35:28.686501 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-12 15:35:28.686512 | orchestrator | Saturday 12 July 2025 15:35:28 +0000 (0:00:00.133) 0:00:41.466 *********
2025-07-12 15:35:28.686522 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:28.686532 | orchestrator |
2025-07-12 15:35:28.686543 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-12 15:35:28.686553 | orchestrator | Saturday 12 July 2025 15:35:28 +0000 (0:00:00.134) 0:00:41.601 *********
2025-07-12 15:35:28.686568 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:28.686584 | orchestrator |
2025-07-12 15:35:28.686602 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-12 15:35:28.686625 | orchestrator | Saturday 12 July 2025 15:35:28 +0000 (0:00:00.137) 0:00:41.739 *********
2025-07-12 15:35:28.686650 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:28.686666 | orchestrator |
2025-07-12 15:35:28.686684 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-12 15:35:28.686714 | orchestrator | Saturday 12 July 2025 15:35:28 +0000 (0:00:00.143) 0:00:41.882 *********
2025-07-12 15:35:33.264015 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264123 | orchestrator |
2025-07-12 15:35:33.264140 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-12 15:35:33.264154 | orchestrator | Saturday 12 July 2025 15:35:28 +0000 (0:00:00.126) 0:00:42.008 *********
2025-07-12 15:35:33.264165 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264176 | orchestrator |
2025-07-12 15:35:33.264187 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-12 15:35:33.264198 | orchestrator | Saturday 12 July 2025 15:35:29 +0000 (0:00:00.330) 0:00:42.339 *********
2025-07-12 15:35:33.264235 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264246 | orchestrator |
2025-07-12 15:35:33.264257 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-12 15:35:33.264267 | orchestrator | Saturday 12 July 2025 15:35:29 +0000 (0:00:00.132) 0:00:42.472 *********
2025-07-12 15:35:33.264278 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264288 | orchestrator |
2025-07-12 15:35:33.264299 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-12 15:35:33.264310 | orchestrator | Saturday 12 July 2025 15:35:29 +0000 (0:00:00.132) 0:00:42.605 *********
2025-07-12 15:35:33.264320 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264331 | orchestrator |
2025-07-12 15:35:33.264341 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-12 15:35:33.264352 | orchestrator | Saturday 12 July 2025 15:35:29 +0000 (0:00:00.138) 0:00:42.743 *********
2025-07-12 15:35:33.264362 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264373 | orchestrator |
2025-07-12 15:35:33.264384 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-07-12 15:35:33.264394 | orchestrator | Saturday 12 July 2025 15:35:29 +0000 (0:00:00.126) 0:00:42.870 *********
2025-07-12 15:35:33.264405 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264415 | orchestrator |
2025-07-12 15:35:33.264426 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-07-12 15:35:33.264437 | orchestrator | Saturday 12 July 2025 15:35:29 +0000 (0:00:00.130) 0:00:43.000 *********
2025-07-12 15:35:33.264447 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264496 | orchestrator |
2025-07-12 15:35:33.264507 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-07-12 15:35:33.264519 | orchestrator | Saturday 12 July 2025 15:35:29 +0000 (0:00:00.132) 0:00:43.132 *********
2025-07-12 15:35:33.264533 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264545 | orchestrator |
2025-07-12 15:35:33.264557 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-12 15:35:33.264570 | orchestrator | Saturday 12 July 2025 15:35:30 +0000 (0:00:00.138) 0:00:43.271 *********
2025-07-12 15:35:33.264581 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264593 | orchestrator |
2025-07-12 15:35:33.264605 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-12 15:35:33.264618 | orchestrator | Saturday 12 July 2025 15:35:30 +0000 (0:00:00.137) 0:00:43.408 *********
2025-07-12 15:35:33.264631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.264645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.264657 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264669 | orchestrator |
2025-07-12 15:35:33.264682 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-12 15:35:33.264694 | orchestrator | Saturday 12 July 2025 15:35:30 +0000 (0:00:00.144) 0:00:43.553 *********
2025-07-12 15:35:33.264706 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.264718 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.264731 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264743 | orchestrator |
2025-07-12 15:35:33.264755 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-12 15:35:33.264767 | orchestrator | Saturday 12 July 2025 15:35:30 +0000 (0:00:00.152) 0:00:43.706 *********
2025-07-12 15:35:33.264779 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.264799 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.264811 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264823 | orchestrator |
2025-07-12 15:35:33.264835 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-12 15:35:33.264847 | orchestrator | Saturday 12 July 2025 15:35:30 +0000 (0:00:00.157) 0:00:43.863 *********
2025-07-12 15:35:33.264877 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.264888 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.264899 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264910 | orchestrator |
2025-07-12 15:35:33.264920 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-07-12 15:35:33.264950 | orchestrator | Saturday 12 July 2025 15:35:30 +0000 (0:00:00.328) 0:00:44.192 *********
2025-07-12 15:35:33.264962 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.264973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.264983 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.264994 | orchestrator |
2025-07-12 15:35:33.265005 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-12 15:35:33.265015 | orchestrator | Saturday 12 July 2025 15:35:31 +0000 (0:00:00.158) 0:00:44.351 *********
2025-07-12 15:35:33.265026 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.265037 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.265048 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.265058 | orchestrator |
2025-07-12 15:35:33.265069 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-12 15:35:33.265080 | orchestrator | Saturday 12 July 2025 15:35:31 +0000 (0:00:00.152) 0:00:44.503 *********
2025-07-12 15:35:33.265090 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.265101 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.265112 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.265122 | orchestrator |
2025-07-12 15:35:33.265133 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-07-12 15:35:33.265144 | orchestrator | Saturday 12 July 2025 15:35:31 +0000 (0:00:00.143) 0:00:44.647 *********
2025-07-12 15:35:33.265155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.265166 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.265176 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.265187 | orchestrator |
2025-07-12 15:35:33.265198 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-07-12 15:35:33.265208 | orchestrator | Saturday 12 July 2025 15:35:31 +0000 (0:00:00.146) 0:00:44.794 *********
2025-07-12 15:35:33.265273 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:35:33.265286 | orchestrator |
2025-07-12 15:35:33.265297 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-07-12 15:35:33.265308 | orchestrator | Saturday 12 July 2025 15:35:32 +0000 (0:00:00.522) 0:00:45.316 *********
2025-07-12 15:35:33.265319 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:35:33.265329 | orchestrator |
2025-07-12 15:35:33.265340 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-07-12 15:35:33.265351 | orchestrator | Saturday 12 July 2025 15:35:32 +0000 (0:00:00.521) 0:00:45.838 *********
2025-07-12 15:35:33.265361 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:35:33.265372 | orchestrator |
2025-07-12 15:35:33.265383 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-07-12 15:35:33.265393 | orchestrator | Saturday 12 July 2025 15:35:32 +0000 (0:00:00.137) 0:00:45.976 *********
2025-07-12 15:35:33.265404 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'vg_name': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.265416 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'vg_name': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.265427 | orchestrator |
2025-07-12 15:35:33.265437 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-07-12 15:35:33.265448 | orchestrator | Saturday 12 July 2025 15:35:32 +0000 (0:00:00.178) 0:00:46.154 *********
2025-07-12 15:35:33.265481 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.265493 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.265504 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:33.265514 | orchestrator |
2025-07-12 15:35:33.265525 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-12 15:35:33.265542 | orchestrator | Saturday 12 July 2025 15:35:33 +0000 (0:00:00.160) 0:00:46.314 *********
2025-07-12 15:35:33.265553 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:33.265564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:33.265582 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:39.181491 | orchestrator |
2025-07-12 15:35:39.181591 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-12 15:35:39.181606 | orchestrator | Saturday 12 July 2025 15:35:33 +0000 (0:00:00.152) 0:00:46.466 *********
2025-07-12 15:35:39.181617 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'})
2025-07-12 15:35:39.181629 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'})
2025-07-12 15:35:39.181639 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:35:39.181650 | orchestrator |
2025-07-12 15:35:39.181660 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-12 15:35:39.181670 | orchestrator | Saturday 12 July 2025 15:35:33 +0000 (0:00:00.152) 0:00:46.619 *********
2025-07-12 15:35:39.181679 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 15:35:39.181689 | orchestrator |     "lvm_report": {
2025-07-12 15:35:39.181699 | orchestrator |         "lv": [
2025-07-12 15:35:39.181708 | orchestrator |             {
2025-07-12 15:35:39.181718 | orchestrator |                 "lv_name": "osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410",
2025-07-12 15:35:39.181728 | orchestrator |                 "vg_name": "ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410"
2025-07-12 15:35:39.181763 | orchestrator |             },
2025-07-12 15:35:39.181774 | orchestrator |             {
2025-07-12 15:35:39.181783 | orchestrator |                 "lv_name": "osd-block-ed518422-90c3-5ab9-913f-91d667874e9d",
2025-07-12 15:35:39.181793 | orchestrator |                 "vg_name": "ceph-ed518422-90c3-5ab9-913f-91d667874e9d"
2025-07-12 15:35:39.181802 | orchestrator |             }
2025-07-12 15:35:39.181811 | orchestrator |         ],
2025-07-12 15:35:39.181824 | orchestrator |         "pv": [
2025-07-12 15:35:39.181834 | orchestrator |             {
2025-07-12 15:35:39.181843 | orchestrator |                 "pv_name": "/dev/sdb",
2025-07-12 15:35:39.181853 | orchestrator |                 "vg_name": "ceph-ed518422-90c3-5ab9-913f-91d667874e9d"
2025-07-12 15:35:39.181863 | orchestrator |             },
2025-07-12 15:35:39.181872 | orchestrator |             {
2025-07-12 15:35:39.181881 | orchestrator |                 "pv_name": "/dev/sdc",
2025-07-12 15:35:39.181891 | orchestrator |                 "vg_name": "ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410"
2025-07-12 15:35:39.181900 | orchestrator |             }
2025-07-12 15:35:39.181909 | orchestrator |         ]
2025-07-12 15:35:39.181919 | orchestrator |     }
2025-07-12 15:35:39.181928 | orchestrator | }
2025-07-12 15:35:39.181938 | orchestrator |
2025-07-12 15:35:39.181948 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-07-12 15:35:39.181957 | orchestrator |
2025-07-12 15:35:39.181966 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 15:35:39.181976 | orchestrator | Saturday 12 July 2025 15:35:33 +0000 (0:00:00.467) 0:00:47.086 *********
2025-07-12 15:35:39.181985 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-12 15:35:39.181995 | orchestrator |
2025-07-12 15:35:39.182004 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-12 15:35:39.182014 | orchestrator | Saturday 12 July 2025 15:35:34 +0000 (0:00:00.262) 0:00:47.349 *********
2025-07-12 15:35:39.182082 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:35:39.182091 | orchestrator |
2025-07-12 15:35:39.182101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182110 | orchestrator | Saturday 12 July 2025 15:35:34 +0000 (0:00:00.235) 0:00:47.585 *********
2025-07-12 15:35:39.182120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-07-12 15:35:39.182138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-07-12 15:35:39.182148 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-07-12 15:35:39.182157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-07-12 15:35:39.182166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-07-12 15:35:39.182176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-07-12 15:35:39.182185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-07-12 15:35:39.182195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-07-12 15:35:39.182204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-07-12 15:35:39.182213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-07-12 15:35:39.182223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-07-12 15:35:39.182232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-07-12 15:35:39.182241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-07-12 15:35:39.182251 | orchestrator |
2025-07-12 15:35:39.182260 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182270 | orchestrator | Saturday 12 July 2025 15:35:34 +0000 (0:00:00.398) 0:00:47.983 *********
2025-07-12 15:35:39.182288 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:39.182297 | orchestrator |
2025-07-12 15:35:39.182307 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182316 | orchestrator | Saturday 12 July 2025 15:35:34 +0000 (0:00:00.191) 0:00:48.175 *********
2025-07-12 15:35:39.182325 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:39.182335 | orchestrator |
2025-07-12 15:35:39.182344 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182372 | orchestrator | Saturday 12 July 2025 15:35:35 +0000 (0:00:00.192) 0:00:48.367 *********
2025-07-12 15:35:39.182383 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:39.182392 | orchestrator |
2025-07-12 15:35:39.182402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182411 | orchestrator | Saturday 12 July 2025 15:35:35 +0000 (0:00:00.195) 0:00:48.562 *********
2025-07-12 15:35:39.182421 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:39.182430 | orchestrator |
2025-07-12 15:35:39.182440 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182470 | orchestrator | Saturday 12 July 2025 15:35:35 +0000 (0:00:00.191) 0:00:48.754 *********
2025-07-12 15:35:39.182480 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:39.182490 | orchestrator |
2025-07-12 15:35:39.182499 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182508 | orchestrator | Saturday 12 July 2025 15:35:35 +0000 (0:00:00.195) 0:00:48.949 *********
2025-07-12 15:35:39.182518 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:39.182527 | orchestrator |
2025-07-12 15:35:39.182581 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182592 | orchestrator | Saturday 12 July 2025 15:35:36 +0000 (0:00:00.577) 0:00:49.527 *********
2025-07-12 15:35:39.182601 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:39.182611 | orchestrator |
2025-07-12 15:35:39.182620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182630 | orchestrator | Saturday 12 July 2025 15:35:36 +0000 (0:00:00.198) 0:00:49.726 *********
2025-07-12 15:35:39.182639 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:39.182648 | orchestrator |
2025-07-12 15:35:39.182658 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182667 | orchestrator | Saturday 12 July 2025 15:35:36 +0000 (0:00:00.198) 0:00:49.925 *********
2025-07-12 15:35:39.182677 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e)
2025-07-12 15:35:39.182687 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e)
2025-07-12 15:35:39.182697 | orchestrator |
2025-07-12 15:35:39.182706 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182715 | orchestrator | Saturday 12 July 2025 15:35:37 +0000 (0:00:00.410) 0:00:50.336 *********
2025-07-12 15:35:39.182724 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d)
2025-07-12 15:35:39.182734 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d)
2025-07-12 15:35:39.182743 | orchestrator |
2025-07-12 15:35:39.182753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182762 | orchestrator | Saturday 12 July 2025 15:35:37 +0000 (0:00:00.421) 0:00:50.757 *********
2025-07-12 15:35:39.182771 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be)
2025-07-12 15:35:39.182781 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be)
2025-07-12 15:35:39.182790 | orchestrator |
2025-07-12 15:35:39.182800 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182809 | orchestrator | Saturday 12 July 2025 15:35:37 +0000 (0:00:00.404) 0:00:51.161 *********
2025-07-12 15:35:39.182826 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a)
2025-07-12 15:35:39.182835 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a)
2025-07-12 15:35:39.182845 | orchestrator |
2025-07-12 15:35:39.182854 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 15:35:39.182863 | orchestrator | Saturday 12 July 2025 15:35:38 +0000 (0:00:00.485) 0:00:51.647 *********
2025-07-12 15:35:39.182873 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-12 15:35:39.182882 | orchestrator |
2025-07-12 15:35:39.182891 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:39.182901 | orchestrator | Saturday 12 July 2025 15:35:38 +0000 (0:00:00.322) 0:00:51.969 *********
2025-07-12 15:35:39.182910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-07-12 15:35:39.182919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-07-12 15:35:39.182928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-07-12 15:35:39.182937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-07-12 15:35:39.182947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-07-12 15:35:39.182956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-07-12 15:35:39.182965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-07-12 15:35:39.182979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-07-12 15:35:39.182989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-07-12 15:35:39.182998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-07-12 15:35:39.183007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-07-12 15:35:39.183025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-07-12 15:35:48.018398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-07-12 15:35:48.018549 | orchestrator |
2025-07-12 15:35:48.018566 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.018579 | orchestrator | Saturday 12 July 2025 15:35:39 +0000 (0:00:00.406) 0:00:52.376 *********
2025-07-12 15:35:48.018590 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.018602 | orchestrator |
2025-07-12 15:35:48.018614 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.018625 | orchestrator | Saturday 12 July 2025 15:35:39 +0000 (0:00:00.188) 0:00:52.565 *********
2025-07-12 15:35:48.018636 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.018647 | orchestrator |
2025-07-12 15:35:48.018657 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.018668 | orchestrator | Saturday 12 July 2025 15:35:39 +0000 (0:00:00.209) 0:00:52.774 *********
2025-07-12 15:35:48.018678 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.018689 | orchestrator |
2025-07-12 15:35:48.018699 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.018710 | orchestrator | Saturday 12 July 2025 15:35:40 +0000 (0:00:00.593) 0:00:53.368 *********
2025-07-12 15:35:48.018721 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.018731 | orchestrator |
2025-07-12 15:35:48.018742 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.018752 | orchestrator | Saturday 12 July 2025 15:35:40 +0000 (0:00:00.206) 0:00:53.575 *********
2025-07-12 15:35:48.018791 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.018802 | orchestrator |
2025-07-12 15:35:48.018813 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.018823 | orchestrator | Saturday 12 July 2025 15:35:40 +0000 (0:00:00.220) 0:00:53.795 *********
2025-07-12 15:35:48.018834 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.018844 | orchestrator |
2025-07-12 15:35:48.018855 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.018866 | orchestrator | Saturday 12 July 2025 15:35:40 +0000 (0:00:00.222) 0:00:54.018 *********
2025-07-12 15:35:48.018876 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.018887 | orchestrator |
2025-07-12 15:35:48.018897 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.018908 | orchestrator | Saturday 12 July 2025 15:35:41 +0000 (0:00:00.207) 0:00:54.226 *********
2025-07-12 15:35:48.018918 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.018931 | orchestrator |
2025-07-12 15:35:48.018944 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.018956 | orchestrator | Saturday 12 July 2025 15:35:41 +0000 (0:00:00.186) 0:00:54.412 *********
2025-07-12 15:35:48.018969 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-07-12 15:35:48.018982 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-07-12 15:35:48.018995 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-07-12 15:35:48.019007 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-07-12 15:35:48.019020 | orchestrator |
2025-07-12 15:35:48.019032 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.019045 | orchestrator | Saturday 12 July 2025 15:35:41 +0000 (0:00:00.637) 0:00:55.050 *********
2025-07-12 15:35:48.019057 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019070 | orchestrator |
2025-07-12 15:35:48.019083 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.019095 | orchestrator | Saturday 12 July 2025 15:35:42 +0000 (0:00:00.197) 0:00:55.247 *********
2025-07-12 15:35:48.019107 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019119 | orchestrator |
2025-07-12 15:35:48.019132 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.019144 | orchestrator | Saturday 12 July 2025 15:35:42 +0000 (0:00:00.186) 0:00:55.434 *********
2025-07-12 15:35:48.019157 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019170 | orchestrator |
2025-07-12 15:35:48.019182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 15:35:48.019195 | orchestrator | Saturday 12 July 2025 15:35:42 +0000 (0:00:00.187) 0:00:55.622 *********
2025-07-12 15:35:48.019205 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019216 | orchestrator |
2025-07-12 15:35:48.019227 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-07-12 15:35:48.019237 | orchestrator | Saturday 12 July 2025 15:35:42 +0000 (0:00:00.191) 0:00:55.813 *********
2025-07-12 15:35:48.019248 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019258 | orchestrator |
2025-07-12 15:35:48.019269 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-07-12 15:35:48.019279 | orchestrator | Saturday 12 July 2025 15:35:42 +0000 (0:00:00.328) 0:00:56.142 *********
2025-07-12 15:35:48.019290 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98eaa118-ceae-5fd7-911b-5a5c065fb5e7'}})
2025-07-12 15:35:48.019301 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd3106c13-92fd-5dcd-ba4d-74ce9f77b023'}})
2025-07-12 15:35:48.019312 | orchestrator |
2025-07-12 15:35:48.019323 | orchestrator | TASK [Create block VGs] ********************************************************
2025-07-12 15:35:48.019333 | orchestrator | Saturday 12 July 2025 15:35:43 +0000 (0:00:00.187) 0:00:56.329 *********
2025-07-12 15:35:48.019359 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})
2025-07-12 15:35:48.019381 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})
2025-07-12 15:35:48.019392 | orchestrator |
2025-07-12 15:35:48.019403 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-07-12 15:35:48.019433 | orchestrator | Saturday 12 July 2025 15:35:45 +0000 (0:00:01.916) 0:00:58.245 *********
2025-07-12 15:35:48.019476 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})
2025-07-12 15:35:48.019489 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})
2025-07-12 15:35:48.019500 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019510 | orchestrator |
2025-07-12 15:35:48.019521 | orchestrator | TASK [Create block LVs] ********************************************************
2025-07-12 15:35:48.019532 | orchestrator | Saturday 12 July 2025 15:35:45 +0000 (0:00:00.149) 0:00:58.395 *********
2025-07-12 15:35:48.019543 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})
2025-07-12 15:35:48.019553 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})
2025-07-12 15:35:48.019564 | orchestrator |
2025-07-12 15:35:48.019575 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-07-12 15:35:48.019585 | orchestrator | Saturday 12 July 2025 15:35:46 +0000 (0:00:01.349) 0:00:59.744 *********
2025-07-12 15:35:48.019596 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})
2025-07-12 15:35:48.019607 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})
2025-07-12 15:35:48.019618 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019628 | orchestrator |
2025-07-12 15:35:48.019639 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-07-12 15:35:48.019649 | orchestrator | Saturday 12 July 2025 15:35:46 +0000 (0:00:00.144) 0:00:59.904 *********
2025-07-12 15:35:48.019660 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019671 | orchestrator |
2025-07-12 15:35:48.019681 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-07-12 15:35:48.019692 | orchestrator | Saturday 12 July 2025 15:35:46 +0000 (0:00:00.144) 0:01:00.049 *********
2025-07-12 15:35:48.019703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})
2025-07-12 15:35:48.019714 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})
2025-07-12 15:35:48.019725 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019735 | orchestrator |
2025-07-12 15:35:48.019746 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-12 15:35:48.019756 | orchestrator | Saturday 12 July 2025 15:35:46 +0000 (0:00:00.126) 0:01:00.203 *********
2025-07-12 15:35:48.019767 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019778 | orchestrator |
2025-07-12 15:35:48.019788 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-12 15:35:48.019799 | orchestrator | Saturday 12 July 2025 15:35:47 +0000 (0:00:00.126) 0:01:00.330 *********
2025-07-12 15:35:48.019809 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})
2025-07-12 15:35:48.019820 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})
2025-07-12 15:35:48.019843 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019853 | orchestrator |
2025-07-12 15:35:48.019864 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-12 15:35:48.019874 | orchestrator | Saturday 12 July 2025 15:35:47 +0000 (0:00:00.154) 0:01:00.484 *********
2025-07-12 15:35:48.019885 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019895 | orchestrator |
2025-07-12 15:35:48.019906 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-07-12 15:35:48.019916 | orchestrator | Saturday 12 July 2025 15:35:47 +0000 (0:00:00.123) 0:01:00.608 *********
2025-07-12 15:35:48.019927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})
2025-07-12 15:35:48.019938 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})
2025-07-12 15:35:48.019948 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:48.019959 | orchestrator |
2025-07-12 15:35:48.019969 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-07-12 15:35:48.019985 | orchestrator | Saturday 12 July 2025 15:35:47 +0000 (0:00:00.132) 0:01:00.752 *********
2025-07-12 15:35:48.019996 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:35:48.020007 | orchestrator |
2025-07-12 15:35:48.020017 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-07-12 15:35:48.020028 | orchestrator | Saturday 12 July 2025 15:35:47 +0000 (0:00:00.132) 0:01:00.884 *********
2025-07-12 15:35:48.020046 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})
2025-07-12 15:35:54.001891 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})
2025-07-12 15:35:54.002005 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:35:54.002085 | orchestrator |
2025-07-12 15:35:54.002100 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-07-12 15:35:54.002112 | orchestrator | Saturday 12 July 2025
15:35:48 +0000 (0:00:00.336) 0:01:01.221 ********* 2025-07-12 15:35:54.002124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:54.002136 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:54.002147 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.002157 | orchestrator | 2025-07-12 15:35:54.002169 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-12 15:35:54.002179 | orchestrator | Saturday 12 July 2025 15:35:48 +0000 (0:00:00.155) 0:01:01.377 ********* 2025-07-12 15:35:54.002191 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:54.002201 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:54.002212 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.002223 | orchestrator | 2025-07-12 15:35:54.002234 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-12 15:35:54.002244 | orchestrator | Saturday 12 July 2025 15:35:48 +0000 (0:00:00.154) 0:01:01.531 ********* 2025-07-12 15:35:54.002255 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.002266 | orchestrator | 2025-07-12 15:35:54.002276 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-12 15:35:54.002310 | orchestrator | Saturday 12 July 2025 15:35:48 +0000 (0:00:00.129) 0:01:01.661 ********* 2025-07-12 15:35:54.002321 | orchestrator | skipping: [testbed-node-5] 2025-07-12 
15:35:54.002331 | orchestrator | 2025-07-12 15:35:54.002342 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-12 15:35:54.002353 | orchestrator | Saturday 12 July 2025 15:35:48 +0000 (0:00:00.126) 0:01:01.787 ********* 2025-07-12 15:35:54.002363 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.002374 | orchestrator | 2025-07-12 15:35:54.002384 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-12 15:35:54.002395 | orchestrator | Saturday 12 July 2025 15:35:48 +0000 (0:00:00.138) 0:01:01.925 ********* 2025-07-12 15:35:54.002406 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 15:35:54.002419 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-12 15:35:54.002433 | orchestrator | } 2025-07-12 15:35:54.002479 | orchestrator | 2025-07-12 15:35:54.002492 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-12 15:35:54.002505 | orchestrator | Saturday 12 July 2025 15:35:48 +0000 (0:00:00.137) 0:01:02.063 ********* 2025-07-12 15:35:54.002518 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 15:35:54.002531 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-12 15:35:54.002543 | orchestrator | } 2025-07-12 15:35:54.002555 | orchestrator | 2025-07-12 15:35:54.002567 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-12 15:35:54.002580 | orchestrator | Saturday 12 July 2025 15:35:48 +0000 (0:00:00.138) 0:01:02.201 ********* 2025-07-12 15:35:54.002592 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 15:35:54.002604 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-12 15:35:54.002616 | orchestrator | } 2025-07-12 15:35:54.002628 | orchestrator | 2025-07-12 15:35:54.002640 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-12 15:35:54.002652 | 
orchestrator | Saturday 12 July 2025 15:35:49 +0000 (0:00:00.144) 0:01:02.346 ********* 2025-07-12 15:35:54.002665 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:35:54.002678 | orchestrator | 2025-07-12 15:35:54.002690 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-12 15:35:54.002703 | orchestrator | Saturday 12 July 2025 15:35:49 +0000 (0:00:00.507) 0:01:02.853 ********* 2025-07-12 15:35:54.002714 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:35:54.002725 | orchestrator | 2025-07-12 15:35:54.002735 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-12 15:35:54.002746 | orchestrator | Saturday 12 July 2025 15:35:50 +0000 (0:00:00.522) 0:01:03.376 ********* 2025-07-12 15:35:54.002756 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:35:54.002767 | orchestrator | 2025-07-12 15:35:54.002778 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-12 15:35:54.002788 | orchestrator | Saturday 12 July 2025 15:35:50 +0000 (0:00:00.514) 0:01:03.890 ********* 2025-07-12 15:35:54.002799 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:35:54.002809 | orchestrator | 2025-07-12 15:35:54.002820 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-12 15:35:54.002830 | orchestrator | Saturday 12 July 2025 15:35:51 +0000 (0:00:00.329) 0:01:04.220 ********* 2025-07-12 15:35:54.002841 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.002852 | orchestrator | 2025-07-12 15:35:54.002862 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-12 15:35:54.002873 | orchestrator | Saturday 12 July 2025 15:35:51 +0000 (0:00:00.124) 0:01:04.345 ********* 2025-07-12 15:35:54.002884 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.002894 | orchestrator | 2025-07-12 15:35:54.002905 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-12 15:35:54.002916 | orchestrator | Saturday 12 July 2025 15:35:51 +0000 (0:00:00.125) 0:01:04.470 ********* 2025-07-12 15:35:54.002926 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 15:35:54.002937 | orchestrator |  "vgs_report": { 2025-07-12 15:35:54.002959 | orchestrator |  "vg": [] 2025-07-12 15:35:54.002991 | orchestrator |  } 2025-07-12 15:35:54.003003 | orchestrator | } 2025-07-12 15:35:54.003014 | orchestrator | 2025-07-12 15:35:54.003025 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-12 15:35:54.003035 | orchestrator | Saturday 12 July 2025 15:35:51 +0000 (0:00:00.140) 0:01:04.611 ********* 2025-07-12 15:35:54.003046 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003057 | orchestrator | 2025-07-12 15:35:54.003067 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-12 15:35:54.003079 | orchestrator | Saturday 12 July 2025 15:35:51 +0000 (0:00:00.138) 0:01:04.750 ********* 2025-07-12 15:35:54.003089 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003100 | orchestrator | 2025-07-12 15:35:54.003127 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-12 15:35:54.003138 | orchestrator | Saturday 12 July 2025 15:35:51 +0000 (0:00:00.130) 0:01:04.881 ********* 2025-07-12 15:35:54.003149 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003159 | orchestrator | 2025-07-12 15:35:54.003170 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-12 15:35:54.003181 | orchestrator | Saturday 12 July 2025 15:35:51 +0000 (0:00:00.131) 0:01:05.013 ********* 2025-07-12 15:35:54.003191 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003202 | orchestrator | 2025-07-12 15:35:54.003212 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-12 15:35:54.003223 | orchestrator | Saturday 12 July 2025 15:35:51 +0000 (0:00:00.133) 0:01:05.146 ********* 2025-07-12 15:35:54.003234 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003244 | orchestrator | 2025-07-12 15:35:54.003258 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-12 15:35:54.003275 | orchestrator | Saturday 12 July 2025 15:35:52 +0000 (0:00:00.126) 0:01:05.272 ********* 2025-07-12 15:35:54.003293 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003311 | orchestrator | 2025-07-12 15:35:54.003330 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-12 15:35:54.003347 | orchestrator | Saturday 12 July 2025 15:35:52 +0000 (0:00:00.131) 0:01:05.404 ********* 2025-07-12 15:35:54.003365 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003377 | orchestrator | 2025-07-12 15:35:54.003388 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-12 15:35:54.003398 | orchestrator | Saturday 12 July 2025 15:35:52 +0000 (0:00:00.130) 0:01:05.534 ********* 2025-07-12 15:35:54.003409 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003420 | orchestrator | 2025-07-12 15:35:54.003430 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-12 15:35:54.003461 | orchestrator | Saturday 12 July 2025 15:35:52 +0000 (0:00:00.141) 0:01:05.676 ********* 2025-07-12 15:35:54.003472 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003483 | orchestrator | 2025-07-12 15:35:54.003494 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-12 15:35:54.003504 | orchestrator | Saturday 12 July 2025 15:35:52 +0000 (0:00:00.349) 0:01:06.025 ********* 
2025-07-12 15:35:54.003515 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003525 | orchestrator | 2025-07-12 15:35:54.003536 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-12 15:35:54.003547 | orchestrator | Saturday 12 July 2025 15:35:52 +0000 (0:00:00.148) 0:01:06.174 ********* 2025-07-12 15:35:54.003557 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003568 | orchestrator | 2025-07-12 15:35:54.003578 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-12 15:35:54.003589 | orchestrator | Saturday 12 July 2025 15:35:53 +0000 (0:00:00.138) 0:01:06.312 ********* 2025-07-12 15:35:54.003600 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003610 | orchestrator | 2025-07-12 15:35:54.003621 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-12 15:35:54.003631 | orchestrator | Saturday 12 July 2025 15:35:53 +0000 (0:00:00.141) 0:01:06.454 ********* 2025-07-12 15:35:54.003652 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003663 | orchestrator | 2025-07-12 15:35:54.003673 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-12 15:35:54.003684 | orchestrator | Saturday 12 July 2025 15:35:53 +0000 (0:00:00.143) 0:01:06.597 ********* 2025-07-12 15:35:54.003695 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003705 | orchestrator | 2025-07-12 15:35:54.003716 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-12 15:35:54.003726 | orchestrator | Saturday 12 July 2025 15:35:53 +0000 (0:00:00.140) 0:01:06.737 ********* 2025-07-12 15:35:54.003737 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 
15:35:54.003748 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:54.003759 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003769 | orchestrator | 2025-07-12 15:35:54.003780 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-12 15:35:54.003790 | orchestrator | Saturday 12 July 2025 15:35:53 +0000 (0:00:00.157) 0:01:06.895 ********* 2025-07-12 15:35:54.003806 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:54.003818 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:54.003828 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:54.003839 | orchestrator | 2025-07-12 15:35:54.003850 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-12 15:35:54.003861 | orchestrator | Saturday 12 July 2025 15:35:53 +0000 (0:00:00.159) 0:01:07.054 ********* 2025-07-12 15:35:54.003881 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:57.005271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:57.005504 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:57.005555 | orchestrator | 2025-07-12 15:35:57.005577 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-12 15:35:57.005596 | orchestrator | Saturday 12 July 2025 
15:35:53 +0000 (0:00:00.150) 0:01:07.205 ********* 2025-07-12 15:35:57.005613 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:57.005631 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:57.005648 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:57.005664 | orchestrator | 2025-07-12 15:35:57.005682 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-12 15:35:57.005700 | orchestrator | Saturday 12 July 2025 15:35:54 +0000 (0:00:00.153) 0:01:07.359 ********* 2025-07-12 15:35:57.005718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:57.005732 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:57.005743 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:57.005753 | orchestrator | 2025-07-12 15:35:57.005767 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-12 15:35:57.005805 | orchestrator | Saturday 12 July 2025 15:35:54 +0000 (0:00:00.152) 0:01:07.512 ********* 2025-07-12 15:35:57.005818 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:57.005832 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:57.005845 | orchestrator | 
skipping: [testbed-node-5] 2025-07-12 15:35:57.005858 | orchestrator | 2025-07-12 15:35:57.005870 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-12 15:35:57.005883 | orchestrator | Saturday 12 July 2025 15:35:54 +0000 (0:00:00.153) 0:01:07.665 ********* 2025-07-12 15:35:57.005896 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:57.005909 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:57.005921 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:57.005933 | orchestrator | 2025-07-12 15:35:57.005945 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-12 15:35:57.005957 | orchestrator | Saturday 12 July 2025 15:35:54 +0000 (0:00:00.379) 0:01:08.045 ********* 2025-07-12 15:35:57.005970 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:57.005983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:57.005995 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:57.006007 | orchestrator | 2025-07-12 15:35:57.006085 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-12 15:35:57.006102 | orchestrator | Saturday 12 July 2025 15:35:55 +0000 (0:00:00.168) 0:01:08.214 ********* 2025-07-12 15:35:57.006114 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:35:57.006126 | orchestrator | 2025-07-12 15:35:57.006137 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-07-12 15:35:57.006148 | orchestrator | Saturday 12 July 2025 15:35:55 +0000 (0:00:00.522) 0:01:08.736 ********* 2025-07-12 15:35:57.006158 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:35:57.006169 | orchestrator | 2025-07-12 15:35:57.006180 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-12 15:35:57.006190 | orchestrator | Saturday 12 July 2025 15:35:56 +0000 (0:00:00.525) 0:01:09.262 ********* 2025-07-12 15:35:57.006201 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:35:57.006212 | orchestrator | 2025-07-12 15:35:57.006222 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-12 15:35:57.006249 | orchestrator | Saturday 12 July 2025 15:35:56 +0000 (0:00:00.149) 0:01:09.411 ********* 2025-07-12 15:35:57.006260 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'vg_name': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'}) 2025-07-12 15:35:57.006272 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'vg_name': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'}) 2025-07-12 15:35:57.006282 | orchestrator | 2025-07-12 15:35:57.006293 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-12 15:35:57.006304 | orchestrator | Saturday 12 July 2025 15:35:56 +0000 (0:00:00.165) 0:01:09.577 ********* 2025-07-12 15:35:57.006338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:57.006350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:57.006373 | orchestrator | skipping: 
[testbed-node-5] 2025-07-12 15:35:57.006384 | orchestrator | 2025-07-12 15:35:57.006395 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-12 15:35:57.006406 | orchestrator | Saturday 12 July 2025 15:35:56 +0000 (0:00:00.159) 0:01:09.736 ********* 2025-07-12 15:35:57.006416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:57.006427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:57.006460 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:57.006471 | orchestrator | 2025-07-12 15:35:57.006482 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-12 15:35:57.006493 | orchestrator | Saturday 12 July 2025 15:35:56 +0000 (0:00:00.156) 0:01:09.892 ********* 2025-07-12 15:35:57.006503 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'})  2025-07-12 15:35:57.006514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'})  2025-07-12 15:35:57.006524 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:35:57.006535 | orchestrator | 2025-07-12 15:35:57.006545 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-12 15:35:57.006556 | orchestrator | Saturday 12 July 2025 15:35:56 +0000 (0:00:00.150) 0:01:10.043 ********* 2025-07-12 15:35:57.006567 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 15:35:57.006578 | orchestrator |  "lvm_report": { 2025-07-12 15:35:57.006589 | orchestrator |  "lv": [ 2025-07-12 
15:35:57.006600 | orchestrator |  { 2025-07-12 15:35:57.006611 | orchestrator |  "lv_name": "osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7", 2025-07-12 15:35:57.006623 | orchestrator |  "vg_name": "ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7" 2025-07-12 15:35:57.006633 | orchestrator |  }, 2025-07-12 15:35:57.006644 | orchestrator |  { 2025-07-12 15:35:57.006655 | orchestrator |  "lv_name": "osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023", 2025-07-12 15:35:57.006665 | orchestrator |  "vg_name": "ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023" 2025-07-12 15:35:57.006676 | orchestrator |  } 2025-07-12 15:35:57.006686 | orchestrator |  ], 2025-07-12 15:35:57.006697 | orchestrator |  "pv": [ 2025-07-12 15:35:57.006708 | orchestrator |  { 2025-07-12 15:35:57.006718 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-12 15:35:57.006729 | orchestrator |  "vg_name": "ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7" 2025-07-12 15:35:57.006739 | orchestrator |  }, 2025-07-12 15:35:57.006750 | orchestrator |  { 2025-07-12 15:35:57.006760 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-12 15:35:57.006771 | orchestrator |  "vg_name": "ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023" 2025-07-12 15:35:57.006782 | orchestrator |  } 2025-07-12 15:35:57.006792 | orchestrator |  ] 2025-07-12 15:35:57.006802 | orchestrator |  } 2025-07-12 15:35:57.006813 | orchestrator | } 2025-07-12 15:35:57.006824 | orchestrator | 2025-07-12 15:35:57.006834 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:35:57.006845 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-12 15:35:57.006856 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-12 15:35:57.006867 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-12 15:35:57.006886 | orchestrator | 2025-07-12 15:35:57.006897 | 
orchestrator | 2025-07-12 15:35:57.006907 | orchestrator | 2025-07-12 15:35:57.006918 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:35:57.006929 | orchestrator | Saturday 12 July 2025 15:35:56 +0000 (0:00:00.145) 0:01:10.189 ********* 2025-07-12 15:35:57.006940 | orchestrator | =============================================================================== 2025-07-12 15:35:57.006950 | orchestrator | Create block VGs -------------------------------------------------------- 5.91s 2025-07-12 15:35:57.006966 | orchestrator | Create block LVs -------------------------------------------------------- 4.05s 2025-07-12 15:35:57.006977 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.91s 2025-07-12 15:35:57.006987 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.58s 2025-07-12 15:35:57.006998 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s 2025-07-12 15:35:57.007008 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.54s 2025-07-12 15:35:57.007019 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.53s 2025-07-12 15:35:57.007029 | orchestrator | Add known partitions to the list of available block devices ------------- 1.40s 2025-07-12 15:35:57.007047 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s 2025-07-12 15:35:57.364507 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s 2025-07-12 15:35:57.364617 | orchestrator | Print LVM report data --------------------------------------------------- 0.91s 2025-07-12 15:35:57.364641 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s 2025-07-12 15:35:57.364658 | orchestrator | Get extra vars for Ceph configuration 
----------------------------------- 0.73s 2025-07-12 15:35:57.364669 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2025-07-12 15:35:57.364680 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.67s 2025-07-12 15:35:57.364690 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2025-07-12 15:35:57.364701 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.66s 2025-07-12 15:35:57.364712 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.66s 2025-07-12 15:35:57.364722 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.65s 2025-07-12 15:35:57.364733 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.64s 2025-07-12 15:36:09.460412 | orchestrator | 2025-07-12 15:36:09 | INFO  | Task bc293321-b9e4-406a-b6c2-80d376f4e814 (facts) was prepared for execution. 2025-07-12 15:36:09.460566 | orchestrator | 2025-07-12 15:36:09 | INFO  | It takes a moment until task bc293321-b9e4-406a-b6c2-80d376f4e814 (facts) has been started and output is visible here. 
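The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" and "Combine JSON from _db/wal/db_wal_vgs_cmd_output" tasks above merge several LVM reports into the single `vgs_report` the play later prints (empty `"vg": []` in this run, since no DB/WAL devices are configured). A minimal sketch of that combine step, assuming the play consumes `vgs --units b --reportformat json`-style output (the variable names and sample strings here are illustrative, not taken from the playbook source):

```python
import json

# Hypothetical samples shaped like `vgs --reportformat json` output; all
# empty here, matching the log's printed report of "vg": [].
db_vgs_out = '{"report": [{"vg": []}]}'
wal_vgs_out = '{"report": [{"vg": []}]}'
db_wal_vgs_out = '{"report": [{"vg": []}]}'

def vg_list(raw):
    """Extract the 'vg' array from one vgs JSON report."""
    return json.loads(raw)["report"][0]["vg"]

# Combine the three per-device-class reports into one vgs_report,
# mirroring the "Combine JSON" task in the play above.
vgs_report = {"vg": vg_list(db_vgs_out)
                    + vg_list(wal_vgs_out)
                    + vg_list(db_wal_vgs_out)}
print(json.dumps(vgs_report))  # → {"vg": []}
```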
2025-07-12 15:36:21.526558 | orchestrator | 2025-07-12 15:36:21.526667 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-12 15:36:21.526684 | orchestrator | 2025-07-12 15:36:21.526697 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-12 15:36:21.526708 | orchestrator | Saturday 12 July 2025 15:36:13 +0000 (0:00:00.266) 0:00:00.266 ********* 2025-07-12 15:36:21.526720 | orchestrator | ok: [testbed-manager] 2025-07-12 15:36:21.526731 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:36:21.526743 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:36:21.526753 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:36:21.526764 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:36:21.526775 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:36:21.526785 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:36:21.526796 | orchestrator | 2025-07-12 15:36:21.526807 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-12 15:36:21.526818 | orchestrator | Saturday 12 July 2025 15:36:14 +0000 (0:00:01.087) 0:00:01.354 ********* 2025-07-12 15:36:21.526879 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:36:21.526893 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:36:21.526904 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:36:21.526914 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:36:21.526925 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:36:21.526936 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:36:21.526947 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:36:21.526957 | orchestrator | 2025-07-12 15:36:21.526969 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-12 15:36:21.526979 | orchestrator | 2025-07-12 15:36:21.526990 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-07-12 15:36:21.527001 | orchestrator | Saturday 12 July 2025 15:36:15 +0000 (0:00:01.234) 0:00:02.588 ********* 2025-07-12 15:36:21.527013 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:36:21.527024 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:36:21.527034 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:36:21.527046 | orchestrator | ok: [testbed-manager] 2025-07-12 15:36:21.527059 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:36:21.527070 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:36:21.527082 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:36:21.527096 | orchestrator | 2025-07-12 15:36:21.527108 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-12 15:36:21.527120 | orchestrator | 2025-07-12 15:36:21.527132 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-12 15:36:21.527144 | orchestrator | Saturday 12 July 2025 15:36:20 +0000 (0:00:04.821) 0:00:07.409 ********* 2025-07-12 15:36:21.527157 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:36:21.527168 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:36:21.527181 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:36:21.527192 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:36:21.527204 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:36:21.527216 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:36:21.527228 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:36:21.527241 | orchestrator | 2025-07-12 15:36:21.527253 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:36:21.527265 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:36:21.527278 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-07-12 15:36:21.527291 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:36:21.527303 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:36:21.527315 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:36:21.527328 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:36:21.527340 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:36:21.527352 | orchestrator | 2025-07-12 15:36:21.527364 | orchestrator | 2025-07-12 15:36:21.527377 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:36:21.527389 | orchestrator | Saturday 12 July 2025 15:36:21 +0000 (0:00:00.516) 0:00:07.925 ********* 2025-07-12 15:36:21.527402 | orchestrator | =============================================================================== 2025-07-12 15:36:21.527413 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.82s 2025-07-12 15:36:21.527454 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2025-07-12 15:36:21.527466 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2025-07-12 15:36:21.527477 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-07-12 15:36:21.789982 | orchestrator | 2025-07-12 15:36:21.793418 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jul 12 15:36:21 UTC 2025 2025-07-12 15:36:21.793516 | orchestrator | 2025-07-12 15:36:23.545509 | orchestrator | 2025-07-12 15:36:23 | INFO  | Collection nutshell is prepared for execution 2025-07-12 15:36:23.545610 | orchestrator | 2025-07-12 15:36:23 | 
INFO  | D [0] - dotfiles 2025-07-12 15:36:33.733183 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [0] - homer 2025-07-12 15:36:33.733291 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [0] - netdata 2025-07-12 15:36:33.733306 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [0] - openstackclient 2025-07-12 15:36:33.733318 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [0] - phpmyadmin 2025-07-12 15:36:33.733329 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [0] - common 2025-07-12 15:36:33.733340 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [1] -- loadbalancer 2025-07-12 15:36:33.733351 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [2] --- opensearch 2025-07-12 15:36:33.733361 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [2] --- mariadb-ng 2025-07-12 15:36:33.733372 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [3] ---- horizon 2025-07-12 15:36:33.733382 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [3] ---- keystone 2025-07-12 15:36:33.733393 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [4] ----- neutron 2025-07-12 15:36:33.733403 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [5] ------ wait-for-nova 2025-07-12 15:36:33.733474 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [5] ------ octavia 2025-07-12 15:36:33.733501 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [4] ----- barbican 2025-07-12 15:36:33.733513 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [4] ----- designate 2025-07-12 15:36:33.733544 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [4] ----- ironic 2025-07-12 15:36:33.733555 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [4] ----- placement 2025-07-12 15:36:33.733566 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [4] ----- magnum 2025-07-12 15:36:33.733687 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [1] -- openvswitch 2025-07-12 15:36:33.733891 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [2] --- ovn 2025-07-12 15:36:33.733929 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [1] -- memcached 
2025-07-12 15:36:33.734081 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [1] -- redis 2025-07-12 15:36:33.734100 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [1] -- rabbitmq-ng 2025-07-12 15:36:33.734111 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [0] - kubernetes 2025-07-12 15:36:33.735894 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [1] -- kubeconfig 2025-07-12 15:36:33.735939 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [1] -- copy-kubeconfig 2025-07-12 15:36:33.736137 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [0] - ceph 2025-07-12 15:36:33.737935 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [1] -- ceph-pools 2025-07-12 15:36:33.737966 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [2] --- copy-ceph-keys 2025-07-12 15:36:33.737978 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [3] ---- cephclient 2025-07-12 15:36:33.738471 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-07-12 15:36:33.738521 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [4] ----- wait-for-keystone 2025-07-12 15:36:33.738546 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [5] ------ kolla-ceph-rgw 2025-07-12 15:36:33.738558 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [5] ------ glance 2025-07-12 15:36:33.738569 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [5] ------ cinder 2025-07-12 15:36:33.738580 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [5] ------ nova 2025-07-12 15:36:33.738591 | orchestrator | 2025-07-12 15:36:33 | INFO  | A [4] ----- prometheus 2025-07-12 15:36:33.738602 | orchestrator | 2025-07-12 15:36:33 | INFO  | D [5] ------ grafana 2025-07-12 15:36:33.962341 | orchestrator | 2025-07-12 15:36:33 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-07-12 15:36:33.962515 | orchestrator | 2025-07-12 15:36:33 | INFO  | Tasks are running in the background 2025-07-12 15:36:36.573967 | orchestrator | 2025-07-12 15:36:36 | INFO  | No task IDs specified, wait for all 
currently running tasks 2025-07-12 15:36:38.710834 | orchestrator | 2025-07-12 15:36:38 | INFO  | Task e076a688-450a-4159-9741-631d0fa6d149 is in state STARTED 2025-07-12 15:36:38.713228 | orchestrator | 2025-07-12 15:36:38 | INFO  | Task 7b9cef4f-efa5-4bdd-b1f7-e78197256e13 is in state STARTED 2025-07-12 15:36:38.713262 | orchestrator | 2025-07-12 15:36:38 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:36:38.718728 | orchestrator | 2025-07-12 15:36:38 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:36:38.719290 | orchestrator | 2025-07-12 15:36:38 | INFO  | Task 1d517d98-4f1a-4f47-9f88-00adcbab40c3 is in state STARTED 2025-07-12 15:36:38.719998 | orchestrator | 2025-07-12 15:36:38 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:36:38.720545 | orchestrator | 2025-07-12 15:36:38 | INFO  | Task 0152e28c-2f5d-4d3c-bbf8-a5f77a73f39b is in state STARTED 2025-07-12 15:36:38.720567 | orchestrator | 2025-07-12 15:36:38 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:36:41.778987 | orchestrator | 2025-07-12 15:36:41 | INFO  | Task e076a688-450a-4159-9741-631d0fa6d149 is in state STARTED 2025-07-12 15:36:41.779445 | orchestrator | 2025-07-12 15:36:41 | INFO  | Task 7b9cef4f-efa5-4bdd-b1f7-e78197256e13 is in state STARTED 2025-07-12 15:36:41.779908 | orchestrator | 2025-07-12 15:36:41 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:36:41.785813 | orchestrator | 2025-07-12 15:36:41 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:36:41.786128 | orchestrator | 2025-07-12 15:36:41 | INFO  | Task 1d517d98-4f1a-4f47-9f88-00adcbab40c3 is in state STARTED 2025-07-12 15:36:41.792622 | orchestrator | 2025-07-12 15:36:41 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:36:41.794561 | orchestrator | 2025-07-12 15:36:41 | INFO  | Task 
0152e28c-2f5d-4d3c-bbf8-a5f77a73f39b is in state STARTED 2025-07-12 15:36:41.794585 | orchestrator | 2025-07-12 15:36:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:36:44.831105 | orchestrator | 2025-07-12 15:36:44 | INFO  | Task e076a688-450a-4159-9741-631d0fa6d149 is in state STARTED 2025-07-12 15:36:44.831947 | orchestrator | 2025-07-12 15:36:44 | INFO  | Task 7b9cef4f-efa5-4bdd-b1f7-e78197256e13 is in state STARTED 2025-07-12 15:36:44.835796 | orchestrator | 2025-07-12 15:36:44 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:36:44.836229 | orchestrator | 2025-07-12 15:36:44 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:36:44.837786 | orchestrator | 2025-07-12 15:36:44 | INFO  | Task 1d517d98-4f1a-4f47-9f88-00adcbab40c3 is in state STARTED 2025-07-12 15:36:44.838395 | orchestrator | 2025-07-12 15:36:44 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:36:44.838893 | orchestrator | 2025-07-12 15:36:44 | INFO  | Task 0152e28c-2f5d-4d3c-bbf8-a5f77a73f39b is in state STARTED 2025-07-12 15:36:44.838914 | orchestrator | 2025-07-12 15:36:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:36:47.905748 | orchestrator | 2025-07-12 15:36:47 | INFO  | Task e076a688-450a-4159-9741-631d0fa6d149 is in state STARTED 2025-07-12 15:36:47.905839 | orchestrator | 2025-07-12 15:36:47 | INFO  | Task 7b9cef4f-efa5-4bdd-b1f7-e78197256e13 is in state STARTED 2025-07-12 15:36:47.905853 | orchestrator | 2025-07-12 15:36:47 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:36:47.905865 | orchestrator | 2025-07-12 15:36:47 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:36:47.905892 | orchestrator | 2025-07-12 15:36:47 | INFO  | Task 1d517d98-4f1a-4f47-9f88-00adcbab40c3 is in state STARTED 2025-07-12 15:36:47.905904 | orchestrator | 2025-07-12 15:36:47 | INFO  | Task 
0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:36:47.911428 | orchestrator | 2025-07-12 15:36:47 | INFO  | Task 0152e28c-2f5d-4d3c-bbf8-a5f77a73f39b is in state STARTED 2025-07-12 15:36:47.911475 | orchestrator | 2025-07-12 15:36:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:36:50.941336 | orchestrator | 2025-07-12 15:36:50 | INFO  | Task e076a688-450a-4159-9741-631d0fa6d149 is in state STARTED 2025-07-12 15:36:50.941843 | orchestrator | 2025-07-12 15:36:50 | INFO  | Task 7b9cef4f-efa5-4bdd-b1f7-e78197256e13 is in state STARTED 2025-07-12 15:36:50.942541 | orchestrator | 2025-07-12 15:36:50 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:36:50.945313 | orchestrator | 2025-07-12 15:36:50 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:36:50.948459 | orchestrator | 2025-07-12 15:36:50 | INFO  | Task 1d517d98-4f1a-4f47-9f88-00adcbab40c3 is in state STARTED 2025-07-12 15:36:50.949072 | orchestrator | 2025-07-12 15:36:50 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:36:50.950191 | orchestrator | 2025-07-12 15:36:50 | INFO  | Task 0152e28c-2f5d-4d3c-bbf8-a5f77a73f39b is in state STARTED 2025-07-12 15:36:50.950213 | orchestrator | 2025-07-12 15:36:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:36:54.006880 | orchestrator | 2025-07-12 15:36:53 | INFO  | Task e076a688-450a-4159-9741-631d0fa6d149 is in state STARTED 2025-07-12 15:36:54.006986 | orchestrator | 2025-07-12 15:36:54 | INFO  | Task 7b9cef4f-efa5-4bdd-b1f7-e78197256e13 is in state STARTED 2025-07-12 15:36:54.007003 | orchestrator | 2025-07-12 15:36:54 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:36:54.007014 | orchestrator | 2025-07-12 15:36:54 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:36:54.007920 | orchestrator | 2025-07-12 15:36:54 | INFO  | Task 
1d517d98-4f1a-4f47-9f88-00adcbab40c3 is in state STARTED 2025-07-12 15:36:54.007994 | orchestrator | 2025-07-12 15:36:54 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:36:54.009330 | orchestrator | 2025-07-12 15:36:54 | INFO  | Task 0152e28c-2f5d-4d3c-bbf8-a5f77a73f39b is in state STARTED 2025-07-12 15:36:54.009370 | orchestrator | 2025-07-12 15:36:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:36:57.079701 | orchestrator | 2025-07-12 15:36:57 | INFO  | Task e076a688-450a-4159-9741-631d0fa6d149 is in state STARTED 2025-07-12 15:36:57.081532 | orchestrator | 2025-07-12 15:36:57 | INFO  | Task 7b9cef4f-efa5-4bdd-b1f7-e78197256e13 is in state STARTED 2025-07-12 15:36:57.082714 | orchestrator | 2025-07-12 15:36:57 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:36:57.084730 | orchestrator | 2025-07-12 15:36:57 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:36:57.091578 | orchestrator | 2025-07-12 15:36:57 | INFO  | Task 1d517d98-4f1a-4f47-9f88-00adcbab40c3 is in state STARTED 2025-07-12 15:36:57.092269 | orchestrator | 2025-07-12 15:36:57 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:36:57.093143 | orchestrator | 2025-07-12 15:36:57 | INFO  | Task 0152e28c-2f5d-4d3c-bbf8-a5f77a73f39b is in state STARTED 2025-07-12 15:36:57.093168 | orchestrator | 2025-07-12 15:36:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:37:00.144117 | orchestrator | 2025-07-12 15:37:00 | INFO  | Task e076a688-450a-4159-9741-631d0fa6d149 is in state STARTED 2025-07-12 15:37:00.144199 | orchestrator | 2025-07-12 15:37:00 | INFO  | Task 7b9cef4f-efa5-4bdd-b1f7-e78197256e13 is in state STARTED 2025-07-12 15:37:00.145607 | orchestrator | 2025-07-12 15:37:00 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:37:00.148863 | orchestrator | 2025-07-12 15:37:00 | INFO  | Task 
3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:37:00.152675 | orchestrator | 2025-07-12 15:37:00 | INFO  | Task 1d517d98-4f1a-4f47-9f88-00adcbab40c3 is in state STARTED 2025-07-12 15:37:00.154375 | orchestrator | 2025-07-12 15:37:00 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:37:00.156327 | orchestrator | 2025-07-12 15:37:00 | INFO  | Task 0152e28c-2f5d-4d3c-bbf8-a5f77a73f39b is in state STARTED 2025-07-12 15:37:00.156366 | orchestrator | 2025-07-12 15:37:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:37:03.245268 | orchestrator | 2025-07-12 15:37:03 | INFO  | Task e076a688-450a-4159-9741-631d0fa6d149 is in state STARTED 2025-07-12 15:37:03.245360 | orchestrator | 2025-07-12 15:37:03 | INFO  | Task 7b9cef4f-efa5-4bdd-b1f7-e78197256e13 is in state SUCCESS 2025-07-12 15:37:03.246890 | orchestrator | 2025-07-12 15:37:03.246933 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-07-12 15:37:03.246947 | orchestrator | 2025-07-12 15:37:03.246959 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-07-12 15:37:03.246970 | orchestrator | Saturday 12 July 2025 15:36:46 +0000 (0:00:01.149) 0:00:01.149 ********* 2025-07-12 15:37:03.246981 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:37:03.246993 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:03.247004 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:37:03.247014 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:37:03.247025 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:37:03.247035 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:37:03.247046 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:37:03.247057 | orchestrator | 2025-07-12 15:37:03.247068 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-07-12 15:37:03.247079 | orchestrator | Saturday 12 July 2025 15:36:50 +0000 (0:00:03.948) 0:00:05.098 ********* 2025-07-12 15:37:03.247090 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-12 15:37:03.247101 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-12 15:37:03.247112 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-12 15:37:03.247123 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-12 15:37:03.247134 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-12 15:37:03.247166 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-12 15:37:03.247177 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-12 15:37:03.247188 | orchestrator | 2025-07-12 15:37:03.247199 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-07-12 15:37:03.247209 | orchestrator | Saturday 12 July 2025 15:36:52 +0000 (0:00:02.072) 0:00:07.170 ********* 2025-07-12 15:37:03.247224 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 15:36:51.349414', 'end': '2025-07-12 15:36:51.361451', 'delta': '0:00:00.012037', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 15:37:03.247245 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 15:36:51.419905', 'end': '2025-07-12 15:36:51.427217', 'delta': '0:00:00.007312', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 15:37:03.247257 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 15:36:51.605366', 'end': '2025-07-12 15:36:51.616348', 'delta': '0:00:00.010982', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 15:37:03.247295 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 15:36:51.747605', 'end': '2025-07-12 15:36:51.755356', 'delta': '0:00:00.007751', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 15:37:03.247308 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 15:36:52.086704', 'end': '2025-07-12 15:36:52.093294', 'delta': '0:00:00.006590', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 15:37:03.247327 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 15:36:52.316790', 'end': '2025-07-12 15:36:52.324584', 'delta': '0:00:00.007794', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 15:37:03.247339 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 15:36:52.696894', 'end': '2025-07-12 15:36:52.707349', 'delta': '0:00:00.010455', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 15:37:03.247350 | orchestrator | 2025-07-12 15:37:03.247361 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-07-12 15:37:03.247372 | orchestrator | Saturday 12 July 2025 15:36:54 +0000 (0:00:02.014) 0:00:09.184 ********* 2025-07-12 15:37:03.247383 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-12 15:37:03.247420 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-12 15:37:03.247434 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-12 15:37:03.247444 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-12 15:37:03.247455 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-12 15:37:03.247467 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-12 15:37:03.247480 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-12 15:37:03.247493 | orchestrator | 2025-07-12 15:37:03.247506 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-07-12 15:37:03.247517 | orchestrator | Saturday 12 July 2025 15:36:57 +0000 (0:00:02.288) 0:00:11.472 ********* 2025-07-12 15:37:03.247530 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-07-12 15:37:03.247542 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-07-12 15:37:03.247554 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-07-12 15:37:03.247567 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-07-12 15:37:03.247579 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-07-12 15:37:03.247591 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-07-12 15:37:03.247604 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-07-12 15:37:03.247617 | orchestrator | 2025-07-12 15:37:03.247629 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:37:03.247664 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:03.247680 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:03.247693 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:03.247705 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:03.247718 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:03.247730 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:03.247742 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:03.247754 | orchestrator | 2025-07-12 15:37:03.247767 | orchestrator | 2025-07-12 15:37:03.247779 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:37:03.247792 | orchestrator | Saturday 12 July 2025 15:37:00 +0000 (0:00:03.252) 0:00:14.725 ********* 2025-07-12 15:37:03.247805 | orchestrator | =============================================================================== 2025-07-12 15:37:03.247817 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.95s 2025-07-12 15:37:03.247828 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.25s 2025-07-12 15:37:03.247839 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.29s 2025-07-12 15:37:03.247849 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.07s 2025-07-12 15:37:03.247860 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. 
--- 2.01s
2025-07-12 15:37:03.253602 | orchestrator | 2025-07-12 15:37:03 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:37:03.260535 | orchestrator | 2025-07-12 15:37:03 | INFO  | Task 44fb341e-e9ea-4425-82ca-278d4a399a2c is in state STARTED
2025-07-12 15:37:03.266790 | orchestrator | 2025-07-12 15:37:03 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED
2025-07-12 15:37:03.268253 | orchestrator | 2025-07-12 15:37:03 | INFO  | Task 1d517d98-4f1a-4f47-9f88-00adcbab40c3 is in state STARTED
2025-07-12 15:37:03.268278 | orchestrator | 2025-07-12 15:37:03 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:37:03.268659 | orchestrator | 2025-07-12 15:37:03 | INFO  | Task 0152e28c-2f5d-4d3c-bbf8-a5f77a73f39b is in state STARTED
2025-07-12 15:37:03.268949 | orchestrator | 2025-07-12 15:37:03 | INFO  | Wait 1 second(s) until the next check
[identical polling cycles from 15:37:06 through 15:37:18 omitted; all tasks, including e076a688-450a-4159-9741-631d0fa6d149, remained in state STARTED]
2025-07-12 15:37:21.654829 | orchestrator | 2025-07-12 15:37:21 | INFO  | Task e076a688-450a-4159-9741-631d0fa6d149 is in state SUCCESS
[polling cycles from 15:37:24 through 15:37:33 omitted; the remaining six tasks stayed in state STARTED]
2025-07-12 15:37:36.956717 | orchestrator | 2025-07-12 15:37:36 | INFO  | Task 0152e28c-2f5d-4d3c-bbf8-a5f77a73f39b is in state SUCCESS
[polling cycles at 15:37:40 and 15:37:43 omitted; the remaining five tasks stayed in state STARTED]
2025-07-12 15:37:46.107779 | orchestrator | 2025-07-12 15:37:46 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:37:46.111073 | orchestrator | 2025-07-12 15:37:46 | INFO  | Task 44fb341e-e9ea-4425-82ca-278d4a399a2c is in state STARTED
2025-07-12 15:37:46.112295 | orchestrator | 2025-07-12 15:37:46 | INFO  | Task 
3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:37:46.113461 | orchestrator | 2025-07-12 15:37:46 | INFO  | Task 1d517d98-4f1a-4f47-9f88-00adcbab40c3 is in state SUCCESS 2025-07-12 15:37:46.115175 | orchestrator | 2025-07-12 15:37:46.115223 | orchestrator | 2025-07-12 15:37:46.115236 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-07-12 15:37:46.115248 | orchestrator | 2025-07-12 15:37:46.115259 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-07-12 15:37:46.115271 | orchestrator | Saturday 12 July 2025 15:36:46 +0000 (0:00:00.872) 0:00:00.872 ********* 2025-07-12 15:37:46.115282 | orchestrator | ok: [testbed-manager] => { 2025-07-12 15:37:46.115295 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-07-12 15:37:46.115308 | orchestrator | } 2025-07-12 15:37:46.115319 | orchestrator | 2025-07-12 15:37:46.115330 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-07-12 15:37:46.115342 | orchestrator | Saturday 12 July 2025 15:36:46 +0000 (0:00:00.466) 0:00:01.338 ********* 2025-07-12 15:37:46.115353 | orchestrator | ok: [testbed-manager] 2025-07-12 15:37:46.115364 | orchestrator | 2025-07-12 15:37:46.115405 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-07-12 15:37:46.115417 | orchestrator | Saturday 12 July 2025 15:36:48 +0000 (0:00:01.737) 0:00:03.076 ********* 2025-07-12 15:37:46.115428 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-07-12 15:37:46.115438 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-07-12 15:37:46.115450 | orchestrator | 2025-07-12 15:37:46.115460 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-07-12 
15:37:46.115471 | orchestrator | Saturday 12 July 2025 15:36:49 +0000 (0:00:01.065) 0:00:04.142 ********* 2025-07-12 15:37:46.115482 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.115492 | orchestrator | 2025-07-12 15:37:46.115503 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-07-12 15:37:46.115513 | orchestrator | Saturday 12 July 2025 15:36:51 +0000 (0:00:01.595) 0:00:05.737 ********* 2025-07-12 15:37:46.115524 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.115534 | orchestrator | 2025-07-12 15:37:46.115545 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-07-12 15:37:46.115578 | orchestrator | Saturday 12 July 2025 15:36:53 +0000 (0:00:02.146) 0:00:07.884 ********* 2025-07-12 15:37:46.115589 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-07-12 15:37:46.115600 | orchestrator | ok: [testbed-manager] 2025-07-12 15:37:46.115611 | orchestrator | 2025-07-12 15:37:46.115621 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-07-12 15:37:46.115632 | orchestrator | Saturday 12 July 2025 15:37:17 +0000 (0:00:24.572) 0:00:32.457 ********* 2025-07-12 15:37:46.115663 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.115674 | orchestrator | 2025-07-12 15:37:46.115684 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:37:46.115695 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:46.115736 | orchestrator | 2025-07-12 15:37:46.115749 | orchestrator | 2025-07-12 15:37:46.115761 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:37:46.115772 | orchestrator | Saturday 12 July 2025 15:37:19 +0000 (0:00:02.068) 0:00:34.526 ********* 
2025-07-12 15:37:46.115785 | orchestrator | =============================================================================== 2025-07-12 15:37:46.115797 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.57s 2025-07-12 15:37:46.115809 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.15s 2025-07-12 15:37:46.115821 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.07s 2025-07-12 15:37:46.115833 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.74s 2025-07-12 15:37:46.115845 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.60s 2025-07-12 15:37:46.115863 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.07s 2025-07-12 15:37:46.115881 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.47s 2025-07-12 15:37:46.116018 | orchestrator | 2025-07-12 15:37:46.116033 | orchestrator | 2025-07-12 15:37:46.116046 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-07-12 15:37:46.116058 | orchestrator | 2025-07-12 15:37:46.116071 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-07-12 15:37:46.116083 | orchestrator | Saturday 12 July 2025 15:36:46 +0000 (0:00:00.843) 0:00:00.843 ********* 2025-07-12 15:37:46.116096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-07-12 15:37:46.116109 | orchestrator | 2025-07-12 15:37:46.116121 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-07-12 15:37:46.116131 | orchestrator | Saturday 12 July 2025 15:36:46 +0000 (0:00:00.667) 0:00:01.511 ********* 2025-07-12 
15:37:46.116142 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-07-12 15:37:46.116152 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-07-12 15:37:46.116163 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-07-12 15:37:46.116174 | orchestrator | 2025-07-12 15:37:46.116193 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-07-12 15:37:46.116204 | orchestrator | Saturday 12 July 2025 15:36:48 +0000 (0:00:01.944) 0:00:03.456 ********* 2025-07-12 15:37:46.116214 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.116225 | orchestrator | 2025-07-12 15:37:46.116235 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-07-12 15:37:46.116246 | orchestrator | Saturday 12 July 2025 15:36:50 +0000 (0:00:01.380) 0:00:04.836 ********* 2025-07-12 15:37:46.116272 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 
2025-07-12 15:37:46.116284 | orchestrator | ok: [testbed-manager] 2025-07-12 15:37:46.116295 | orchestrator | 2025-07-12 15:37:46.116305 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-07-12 15:37:46.116316 | orchestrator | Saturday 12 July 2025 15:37:28 +0000 (0:00:37.874) 0:00:42.710 ********* 2025-07-12 15:37:46.116326 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.116337 | orchestrator | 2025-07-12 15:37:46.116347 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-07-12 15:37:46.116358 | orchestrator | Saturday 12 July 2025 15:37:29 +0000 (0:00:01.394) 0:00:44.105 ********* 2025-07-12 15:37:46.116423 | orchestrator | ok: [testbed-manager] 2025-07-12 15:37:46.116436 | orchestrator | 2025-07-12 15:37:46.116446 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-07-12 15:37:46.116457 | orchestrator | Saturday 12 July 2025 15:37:30 +0000 (0:00:00.956) 0:00:45.061 ********* 2025-07-12 15:37:46.116467 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.116478 | orchestrator | 2025-07-12 15:37:46.116488 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-07-12 15:37:46.116498 | orchestrator | Saturday 12 July 2025 15:37:31 +0000 (0:00:01.541) 0:00:46.602 ********* 2025-07-12 15:37:46.116509 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.116519 | orchestrator | 2025-07-12 15:37:46.116529 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-07-12 15:37:46.116540 | orchestrator | Saturday 12 July 2025 15:37:32 +0000 (0:00:00.780) 0:00:47.382 ********* 2025-07-12 15:37:46.116550 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.116560 | orchestrator | 2025-07-12 15:37:46.116571 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : 
Copy bash completion script] *** 2025-07-12 15:37:46.116581 | orchestrator | Saturday 12 July 2025 15:37:33 +0000 (0:00:00.843) 0:00:48.226 ********* 2025-07-12 15:37:46.116592 | orchestrator | ok: [testbed-manager] 2025-07-12 15:37:46.116602 | orchestrator | 2025-07-12 15:37:46.116612 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:37:46.116623 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:46.116633 | orchestrator | 2025-07-12 15:37:46.116644 | orchestrator | 2025-07-12 15:37:46.116654 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:37:46.116664 | orchestrator | Saturday 12 July 2025 15:37:34 +0000 (0:00:00.465) 0:00:48.692 ********* 2025-07-12 15:37:46.116675 | orchestrator | =============================================================================== 2025-07-12 15:37:46.116685 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.87s 2025-07-12 15:37:46.116696 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.94s 2025-07-12 15:37:46.116706 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.54s 2025-07-12 15:37:46.116719 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.39s 2025-07-12 15:37:46.116738 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.38s 2025-07-12 15:37:46.116756 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.96s 2025-07-12 15:37:46.116767 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.84s 2025-07-12 15:37:46.116778 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.78s 2025-07-12 15:37:46.116789 | 
orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.67s 2025-07-12 15:37:46.116808 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.47s 2025-07-12 15:37:46.116819 | orchestrator | 2025-07-12 15:37:46.116829 | orchestrator | 2025-07-12 15:37:46.116840 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:37:46.116850 | orchestrator | 2025-07-12 15:37:46.116860 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:37:46.116871 | orchestrator | Saturday 12 July 2025 15:36:44 +0000 (0:00:00.468) 0:00:00.468 ********* 2025-07-12 15:37:46.116881 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-07-12 15:37:46.116891 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-07-12 15:37:46.116902 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-07-12 15:37:46.116912 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-07-12 15:37:46.116923 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-07-12 15:37:46.116933 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-07-12 15:37:46.116951 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-07-12 15:37:46.116962 | orchestrator | 2025-07-12 15:37:46.116973 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-07-12 15:37:46.116990 | orchestrator | 2025-07-12 15:37:46.117008 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-07-12 15:37:46.117027 | orchestrator | Saturday 12 July 2025 15:36:46 +0000 (0:00:02.257) 0:00:02.726 ********* 2025-07-12 15:37:46.117056 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:37:46.117070 | orchestrator | 2025-07-12 15:37:46.117081 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-07-12 15:37:46.117091 | orchestrator | Saturday 12 July 2025 15:36:49 +0000 (0:00:02.817) 0:00:05.544 ********* 2025-07-12 15:37:46.117102 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:37:46.117113 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:37:46.117123 | orchestrator | ok: [testbed-manager] 2025-07-12 15:37:46.117133 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:37:46.117144 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:37:46.117162 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:37:46.117174 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:37:46.117184 | orchestrator | 2025-07-12 15:37:46.117195 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-07-12 15:37:46.117205 | orchestrator | Saturday 12 July 2025 15:36:51 +0000 (0:00:01.958) 0:00:07.502 ********* 2025-07-12 15:37:46.117215 | orchestrator | ok: [testbed-manager] 2025-07-12 15:37:46.117226 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:37:46.117236 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:37:46.117247 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:37:46.117257 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:37:46.117267 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:37:46.117277 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:37:46.117287 | orchestrator | 2025-07-12 15:37:46.117298 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-07-12 15:37:46.117309 | orchestrator | Saturday 12 July 2025 15:36:55 +0000 (0:00:03.946) 0:00:11.449 ********* 
2025-07-12 15:37:46.117319 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:37:46.117330 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.117340 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:37:46.117350 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:37:46.117361 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:37:46.117435 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:37:46.117453 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:37:46.117471 | orchestrator | 2025-07-12 15:37:46.117490 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-07-12 15:37:46.117519 | orchestrator | Saturday 12 July 2025 15:36:59 +0000 (0:00:03.558) 0:00:15.007 ********* 2025-07-12 15:37:46.117530 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.117552 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:37:46.117563 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:37:46.117574 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:37:46.117584 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:37:46.117594 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:37:46.117605 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:37:46.117615 | orchestrator | 2025-07-12 15:37:46.117626 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-07-12 15:37:46.117637 | orchestrator | Saturday 12 July 2025 15:37:08 +0000 (0:00:09.695) 0:00:24.703 ********* 2025-07-12 15:37:46.117647 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.117658 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:37:46.117669 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:37:46.117679 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:37:46.117698 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:37:46.117709 | orchestrator | changed: [testbed-node-5] 2025-07-12 
15:37:46.117784 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:37:46.117801 | orchestrator | 2025-07-12 15:37:46.117821 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-07-12 15:37:46.117837 | orchestrator | Saturday 12 July 2025 15:37:25 +0000 (0:00:16.610) 0:00:41.313 ********* 2025-07-12 15:37:46.117849 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:37:46.117862 | orchestrator | 2025-07-12 15:37:46.117873 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-07-12 15:37:46.117883 | orchestrator | Saturday 12 July 2025 15:37:26 +0000 (0:00:01.275) 0:00:42.588 ********* 2025-07-12 15:37:46.117894 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-07-12 15:37:46.117905 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-07-12 15:37:46.117915 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-07-12 15:37:46.117924 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-07-12 15:37:46.117934 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-07-12 15:37:46.117943 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-07-12 15:37:46.117952 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-07-12 15:37:46.117962 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-07-12 15:37:46.117971 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-07-12 15:37:46.117981 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-07-12 15:37:46.117990 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-07-12 15:37:46.117999 | orchestrator | changed: [testbed-node-1] => 
(item=stream.conf) 2025-07-12 15:37:46.118009 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-07-12 15:37:46.118075 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-07-12 15:37:46.118085 | orchestrator | 2025-07-12 15:37:46.118095 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-07-12 15:37:46.118138 | orchestrator | Saturday 12 July 2025 15:37:31 +0000 (0:00:04.726) 0:00:47.315 ********* 2025-07-12 15:37:46.118149 | orchestrator | ok: [testbed-manager] 2025-07-12 15:37:46.118159 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:37:46.118168 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:37:46.118177 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:37:46.118192 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:37:46.118209 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:37:46.118225 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:37:46.118237 | orchestrator | 2025-07-12 15:37:46.118247 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-07-12 15:37:46.118256 | orchestrator | Saturday 12 July 2025 15:37:33 +0000 (0:00:01.776) 0:00:49.091 ********* 2025-07-12 15:37:46.118270 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.118280 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:37:46.118289 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:37:46.118298 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:37:46.118307 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:37:46.118316 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:37:46.118326 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:37:46.118335 | orchestrator | 2025-07-12 15:37:46.118344 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-07-12 15:37:46.118363 | orchestrator | Saturday 12 July 2025 15:37:35 +0000 
(0:00:02.324) 0:00:51.415 ********* 2025-07-12 15:37:46.118399 | orchestrator | ok: [testbed-manager] 2025-07-12 15:37:46.118410 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:37:46.118419 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:37:46.118428 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:37:46.118446 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:37:46.118455 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:37:46.118465 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:37:46.118474 | orchestrator | 2025-07-12 15:37:46.118483 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-07-12 15:37:46.118493 | orchestrator | Saturday 12 July 2025 15:37:37 +0000 (0:00:01.584) 0:00:52.999 ********* 2025-07-12 15:37:46.118502 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:37:46.118512 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:37:46.118521 | orchestrator | ok: [testbed-manager] 2025-07-12 15:37:46.118530 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:37:46.118539 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:37:46.118548 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:37:46.118593 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:37:46.118603 | orchestrator | 2025-07-12 15:37:46.118613 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-07-12 15:37:46.118623 | orchestrator | Saturday 12 July 2025 15:37:38 +0000 (0:00:01.838) 0:00:54.838 ********* 2025-07-12 15:37:46.118632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-07-12 15:37:46.118643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:37:46.118654 | orchestrator | 2025-07-12 
15:37:46.118663 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-07-12 15:37:46.118672 | orchestrator | Saturday 12 July 2025 15:37:40 +0000 (0:00:01.265) 0:00:56.104 ********* 2025-07-12 15:37:46.118682 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.118691 | orchestrator | 2025-07-12 15:37:46.118700 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-07-12 15:37:46.118710 | orchestrator | Saturday 12 July 2025 15:37:41 +0000 (0:00:01.774) 0:00:57.878 ********* 2025-07-12 15:37:46.118719 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:37:46.118728 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:37:46.118738 | orchestrator | changed: [testbed-manager] 2025-07-12 15:37:46.118747 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:37:46.118756 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:37:46.118766 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:37:46.118775 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:37:46.118784 | orchestrator | 2025-07-12 15:37:46.118794 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:37:46.118803 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:46.118813 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:46.118823 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:46.118832 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:46.118841 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:46.118851 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:46.118860 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:37:46.118869 | orchestrator | 2025-07-12 15:37:46.118879 | orchestrator | 2025-07-12 15:37:46.118888 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:37:46.118906 | orchestrator | Saturday 12 July 2025 15:37:45 +0000 (0:00:03.305) 0:01:01.184 ********* 2025-07-12 15:37:46.118915 | orchestrator | =============================================================================== 2025-07-12 15:37:46.118925 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.61s 2025-07-12 15:37:46.118934 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.70s 2025-07-12 15:37:46.118943 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.73s 2025-07-12 15:37:46.118953 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.95s 2025-07-12 15:37:46.118962 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.56s 2025-07-12 15:37:46.118971 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.31s 2025-07-12 15:37:46.118992 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.82s 2025-07-12 15:37:46.119009 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.32s 2025-07-12 15:37:46.119027 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.26s 2025-07-12 15:37:46.119042 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.96s 2025-07-12 15:37:46.119052 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 
1.84s 2025-07-12 15:37:46.119067 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.78s 2025-07-12 15:37:46.119116 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.77s 2025-07-12 15:37:46.119126 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.58s 2025-07-12 15:37:46.119135 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.28s 2025-07-12 15:37:46.119145 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.27s 2025-07-12 15:37:46.119154 | orchestrator | 2025-07-12 15:37:46 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:37:46.119164 | orchestrator | 2025-07-12 15:37:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:37:49.154089 | orchestrator | 2025-07-12 15:37:49 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:37:49.155285 | orchestrator | 2025-07-12 15:37:49 | INFO  | Task 44fb341e-e9ea-4425-82ca-278d4a399a2c is in state STARTED 2025-07-12 15:37:49.157034 | orchestrator | 2025-07-12 15:37:49 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:37:49.159532 | orchestrator | 2025-07-12 15:37:49 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:37:49.159574 | orchestrator | 2025-07-12 15:37:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:37:52.192894 | orchestrator | 2025-07-12 15:37:52 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:37:52.193276 | orchestrator | 2025-07-12 15:37:52 | INFO  | Task 44fb341e-e9ea-4425-82ca-278d4a399a2c is in state STARTED 2025-07-12 15:37:52.194332 | orchestrator | 2025-07-12 15:37:52 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:37:52.194998 | orchestrator 
| 2025-07-12 15:37:52 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:37:52.195022 | orchestrator | 2025-07-12 15:37:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:38:47.114262 | orchestrator | 2025-07-12 15:38:47 | INFO  | Task
0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:38:47.114512 | orchestrator | 2025-07-12 15:38:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:38:50.163077 | orchestrator | 2025-07-12 15:38:50 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:38:50.164013 | orchestrator | 2025-07-12 15:38:50 | INFO  | Task 44fb341e-e9ea-4425-82ca-278d4a399a2c is in state STARTED 2025-07-12 15:38:50.166254 | orchestrator | 2025-07-12 15:38:50 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:38:50.166722 | orchestrator | 2025-07-12 15:38:50 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:38:50.166918 | orchestrator | 2025-07-12 15:38:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:38:53.207077 | orchestrator | 2025-07-12 15:38:53 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:38:53.208320 | orchestrator | 2025-07-12 15:38:53 | INFO  | Task 44fb341e-e9ea-4425-82ca-278d4a399a2c is in state STARTED 2025-07-12 15:38:53.208399 | orchestrator | 2025-07-12 15:38:53 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:38:53.209239 | orchestrator | 2025-07-12 15:38:53 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:38:53.209851 | orchestrator | 2025-07-12 15:38:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:38:56.250704 | orchestrator | 2025-07-12 15:38:56 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:38:56.251296 | orchestrator | 2025-07-12 15:38:56 | INFO  | Task 44fb341e-e9ea-4425-82ca-278d4a399a2c is in state SUCCESS 2025-07-12 15:38:56.253219 | orchestrator | 2025-07-12 15:38:56 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:38:56.254536 | orchestrator | 2025-07-12 15:38:56 | INFO  | Task 
0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:38:56.254827 | orchestrator | 2025-07-12 15:38:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:39:08.445079 | orchestrator | 2025-07-12 15:39:08 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:39:08.448578 | orchestrator | 2025-07-12 15:39:08 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state
STARTED 2025-07-12 15:39:08.450462 | orchestrator | 2025-07-12 15:39:08 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:39:08.450494 | orchestrator | 2025-07-12 15:39:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:39:11.494106 | orchestrator | 2025-07-12 15:39:11 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:39:11.494584 | orchestrator | 2025-07-12 15:39:11 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state STARTED 2025-07-12 15:39:11.496381 | orchestrator | 2025-07-12 15:39:11 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:39:11.497079 | orchestrator | 2025-07-12 15:39:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:39:14.555714 | orchestrator | 2025-07-12 15:39:14 | INFO  | Task fcb04d3d-5159-42a1-8b04-73f638ba57e0 is in state STARTED 2025-07-12 15:39:14.558258 | orchestrator | 2025-07-12 15:39:14 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED 2025-07-12 15:39:14.560412 | orchestrator | 2025-07-12 15:39:14 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED 2025-07-12 15:39:14.564104 | orchestrator | 2025-07-12 15:39:14 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED 2025-07-12 15:39:14.564144 | orchestrator | 2025-07-12 15:39:14 | INFO  | Task 3363ac09-162e-4884-bf70-ccfb73b056eb is in state SUCCESS 2025-07-12 15:39:14.566644 | orchestrator | 2025-07-12 15:39:14.566796 | orchestrator | 2025-07-12 15:39:14.566813 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-07-12 15:39:14.566825 | orchestrator | 2025-07-12 15:39:14.566836 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-07-12 15:39:14.566847 | orchestrator | Saturday 12 July 2025 15:37:07 +0000 (0:00:00.353) 0:00:00.353 ********* 2025-07-12 15:39:14.566859 | orchestrator | 
ok: [testbed-manager] 2025-07-12 15:39:14.566870 | orchestrator | 2025-07-12 15:39:14.566881 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-07-12 15:39:14.566893 | orchestrator | Saturday 12 July 2025 15:37:08 +0000 (0:00:01.015) 0:00:01.368 ********* 2025-07-12 15:39:14.566948 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-07-12 15:39:14.566960 | orchestrator | 2025-07-12 15:39:14.566971 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-07-12 15:39:14.566982 | orchestrator | Saturday 12 July 2025 15:37:09 +0000 (0:00:00.830) 0:00:02.198 ********* 2025-07-12 15:39:14.566992 | orchestrator | changed: [testbed-manager] 2025-07-12 15:39:14.567003 | orchestrator | 2025-07-12 15:39:14.567014 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-07-12 15:39:14.567024 | orchestrator | Saturday 12 July 2025 15:37:10 +0000 (0:00:01.860) 0:00:04.059 ********* 2025-07-12 15:39:14.567061 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-07-12 15:39:14.567073 | orchestrator | ok: [testbed-manager] 2025-07-12 15:39:14.567083 | orchestrator | 2025-07-12 15:39:14.567094 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-07-12 15:39:14.567105 | orchestrator | Saturday 12 July 2025 15:38:49 +0000 (0:01:38.601) 0:01:42.660 ********* 2025-07-12 15:39:14.567115 | orchestrator | changed: [testbed-manager] 2025-07-12 15:39:14.567125 | orchestrator | 2025-07-12 15:39:14.567136 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:39:14.567146 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:39:14.567229 | orchestrator | 2025-07-12 15:39:14.567243 | orchestrator | 2025-07-12 15:39:14.567254 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:39:14.567265 | orchestrator | Saturday 12 July 2025 15:38:53 +0000 (0:00:04.343) 0:01:47.004 ********* 2025-07-12 15:39:14.567275 | orchestrator | =============================================================================== 2025-07-12 15:39:14.567286 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 98.60s 2025-07-12 15:39:14.567296 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.34s 2025-07-12 15:39:14.567307 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.86s 2025-07-12 15:39:14.567317 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.02s 2025-07-12 15:39:14.567360 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.83s 2025-07-12 15:39:14.567371 | orchestrator | 2025-07-12 15:39:14.567382 | orchestrator | 2025-07-12 15:39:14.567392 | orchestrator | PLAY [Apply role common] 
******************************************************* 2025-07-12 15:39:14.567403 | orchestrator | 2025-07-12 15:39:14.567414 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-07-12 15:39:14.567424 | orchestrator | Saturday 12 July 2025 15:36:38 +0000 (0:00:00.227) 0:00:00.227 ********* 2025-07-12 15:39:14.567435 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:39:14.567447 | orchestrator | 2025-07-12 15:39:14.567458 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-07-12 15:39:14.567468 | orchestrator | Saturday 12 July 2025 15:36:39 +0000 (0:00:01.169) 0:00:01.397 ********* 2025-07-12 15:39:14.567479 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 15:39:14.567489 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 15:39:14.567500 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 15:39:14.567510 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 15:39:14.567521 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 15:39:14.567532 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 15:39:14.567542 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 15:39:14.567553 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 15:39:14.567563 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 15:39:14.567574 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 
'cron'}, 'cron']) 2025-07-12 15:39:14.567584 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 15:39:14.567595 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 15:39:14.567607 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 15:39:14.567617 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 15:39:14.567628 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 15:39:14.567638 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 15:39:14.567663 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 15:39:14.567675 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 15:39:14.567696 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 15:39:14.567716 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 15:39:14.567727 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 15:39:14.567737 | orchestrator | 2025-07-12 15:39:14.567748 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-07-12 15:39:14.567759 | orchestrator | Saturday 12 July 2025 15:36:43 +0000 (0:00:04.247) 0:00:05.644 ********* 2025-07-12 15:39:14.567769 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:39:14.567781 | orchestrator | 2025-07-12 15:39:14.567792 | 
orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-07-12 15:39:14.567802 | orchestrator | Saturday 12 July 2025 15:36:44 +0000 (0:00:01.228) 0:00:06.872 ********* 2025-07-12 15:39:14.567817 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.567833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.567845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.567857 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.567873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.567902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.567921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.567932 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.567947 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.567960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.567971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.567987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.567998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.568028 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.568040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.568051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.568063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.568074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.568085 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.568101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.568118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.568129 | orchestrator | 2025-07-12 15:39:14.568141 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-07-12 15:39:14.568157 | orchestrator | Saturday 12 July 2025 15:36:49 +0000 (0:00:05.084) 0:00:11.958 ********* 2025-07-12 15:39:14.568169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568204 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568215 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568227 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568244 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:39:14.568260 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568319 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:39:14.568355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568391 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:39:14.568402 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:39:14.568419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568478 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:39:14.568489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 
15:39:14.568501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568523 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:39:14.568534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568575 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:39:14.568586 | orchestrator | 2025-07-12 15:39:14.568601 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-07-12 15:39:14.568612 | orchestrator | Saturday 12 July 2025 15:36:51 +0000 (0:00:01.169) 0:00:13.127 ********* 2025-07-12 15:39:14.568624 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568676 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568688 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568704 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:39:14.568715 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:39:14.568726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-07-12 15:39:14.568854 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:39:14.568864 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:39:14.568875 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:39:14.568886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568928 | orchestrator | skipping: [testbed-node-4] 2025-07-12 
15:39:14.568939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 15:39:14.568951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.568979 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:39:14.568990 | orchestrator | 2025-07-12 15:39:14.569000 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-07-12 15:39:14.569011 | orchestrator | Saturday 12 July 2025 
15:36:53 +0000 (0:00:02.416) 0:00:15.543 *********
2025-07-12 15:39:14.569022 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:39:14.569032 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:39:14.569043 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:39:14.569053 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:39:14.569064 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:39:14.569074 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:39:14.569085 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:39:14.569095 | orchestrator |
2025-07-12 15:39:14.569106 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-07-12 15:39:14.569117 | orchestrator | Saturday 12 July 2025 15:36:54 +0000 (0:00:00.774) 0:00:16.318 *********
2025-07-12 15:39:14.569127 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:39:14.569138 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:39:14.569153 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:39:14.569163 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:39:14.569174 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:39:14.569185 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:39:14.569195 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:39:14.569205 | orchestrator |
2025-07-12 15:39:14.569216 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-07-12 15:39:14.569227 | orchestrator | Saturday 12 July 2025 15:36:55 +0000 (0:00:01.140) 0:00:17.458 *********
2025-07-12 15:39:14.569250 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.569263 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.569295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2025-07-12 15:39:14.569307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.569318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.569357 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.569421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.569451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569507 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569571 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.569582 | orchestrator | 2025-07-12 15:39:14.569593 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-07-12 15:39:14.569604 | orchestrator | Saturday 12 July 2025 15:37:01 +0000 (0:00:05.602) 0:00:23.060 ********* 2025-07-12 15:39:14.569615 | orchestrator | [WARNING]: Skipped 2025-07-12 15:39:14.569626 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a directory
2025-07-12 15:39:14.569669 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 15:39:14.569680 | orchestrator |
2025-07-12 15:39:14.569690 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-07-12 15:39:14.569701 | orchestrator | Saturday 12 July 2025 15:37:03 +0000 (0:00:02.204) 0:00:25.265 *********
2025-07-12 15:39:14.569711 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a directory
2025-07-12 15:39:14.569770 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 15:39:14.569781 | orchestrator |
2025-07-12 15:39:14.569792 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-07-12 15:39:14.569803 | orchestrator | Saturday 12 July 2025 15:37:04 +0000 (0:00:01.474) 0:00:26.740 *********
2025-07-12 15:39:14.569813 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a directory
2025-07-12 15:39:14.569873 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 15:39:14.569884 | orchestrator |
2025-07-12 15:39:14.569900 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-07-12 15:39:14.569911 | orchestrator | Saturday 12 July 2025 15:37:05 +0000 (0:00:00.952) 0:00:27.692 *********
2025-07-12 15:39:14.569922 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a directory
2025-07-12 15:39:14.569976 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 15:39:14.569987 | orchestrator |
2025-07-12 15:39:14.569998 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-07-12 15:39:14.570008 | orchestrator | Saturday 12 July 2025 15:37:06 +0000 (0:00:00.925) 0:00:28.617 *********
2025-07-12 15:39:14.570070 | orchestrator | changed: [testbed-manager]
2025-07-12 15:39:14.570084 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:39:14.570094 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:39:14.570105 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:39:14.570115 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:39:14.570126 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:39:14.570137 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:39:14.570147 | orchestrator |
2025-07-12 15:39:14.570158 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-07-12 15:39:14.570169 | orchestrator | Saturday 12 July 2025 15:37:11 +0000 (0:00:04.763) 0:00:33.380 *********
2025-07-12 15:39:14.570180 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12
15:39:14.570190 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 15:39:14.570202 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 15:39:14.570213 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 15:39:14.570223 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 15:39:14.570234 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 15:39:14.570244 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 15:39:14.570255 | orchestrator | 2025-07-12 15:39:14.570266 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-07-12 15:39:14.570277 | orchestrator | Saturday 12 July 2025 15:37:15 +0000 (0:00:03.830) 0:00:37.211 ********* 2025-07-12 15:39:14.570287 | orchestrator | changed: [testbed-manager] 2025-07-12 15:39:14.570298 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:39:14.570312 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:39:14.570366 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:39:14.570385 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:39:14.570402 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:39:14.570419 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:39:14.570436 | orchestrator | 2025-07-12 15:39:14.570454 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-07-12 15:39:14.570472 | orchestrator | Saturday 12 July 2025 15:37:17 +0000 (0:00:02.185) 0:00:39.396 ********* 2025-07-12 15:39:14.570492 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.570533 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.570546 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.570566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.570578 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.570603 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.570614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.570625 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.570643 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.570659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.570677 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.570689 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 15:39:14.570700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:39:14.570711 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:39:14.570722 | orchestrator | ok: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 15:39:14.570740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.570751 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 15:39:14.570762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.570783 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.570795 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.570806 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.570817 | orchestrator |
2025-07-12 15:39:14.570828 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-07-12 15:39:14.570839 | orchestrator | Saturday 12 July 2025 15:37:19 +0000 (0:00:02.536) 0:00:41.933 *********
2025-07-12 15:39:14.570850 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 15:39:14.570865 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 15:39:14.570877 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 15:39:14.570894 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 15:39:14.570904 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 15:39:14.570915 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 15:39:14.570926 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 15:39:14.570936 | orchestrator |
2025-07-12 15:39:14.570947 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-07-12 15:39:14.570958 | orchestrator | Saturday 12 July 2025 15:37:23 +0000 (0:00:03.124) 0:00:45.057 *********
2025-07-12 15:39:14.570969 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 15:39:14.570979 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 15:39:14.570990 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 15:39:14.571001 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 15:39:14.571011 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 15:39:14.571067 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 15:39:14.571078 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 15:39:14.571089 | orchestrator |
2025-07-12 15:39:14.571100 | orchestrator | TASK [common : Check common containers] ****************************************
2025-07-12 15:39:14.571110 | orchestrator | Saturday 12 July 2025 15:37:24 +0000 (0:00:01.881) 0:00:46.938 *********
2025-07-12 15:39:14.571126 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 15:39:14.571138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 15:39:14.571158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 15:39:14.571170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 15:39:14.571192 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 15:39:14.571215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 15:39:14.571226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 15:39:14.571242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571272 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571302 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571404 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:39:14.571530 | orchestrator |
2025-07-12 15:39:14.571545 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-07-12 15:39:14.571561 | orchestrator | Saturday 12 July 2025 15:37:27 +0000 (0:00:02.623) 0:00:49.562 *********
2025-07-12 15:39:14.571576 | orchestrator | changed: [testbed-manager]
2025-07-12 15:39:14.571592 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:39:14.571609 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:39:14.571624 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:39:14.571641 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:39:14.571656 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:39:14.571673 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:39:14.571689 | orchestrator |
2025-07-12 15:39:14.571705 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-07-12 15:39:14.571720 | orchestrator | Saturday 12 July 2025 15:37:29 +0000 (0:00:01.939) 0:00:51.501 *********
2025-07-12 15:39:14.571738 | orchestrator | changed: [testbed-manager]
2025-07-12 15:39:14.571753 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:39:14.571769 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:39:14.571785 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:39:14.571795 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:39:14.571804 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:39:14.571813 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:39:14.571823 | orchestrator |
2025-07-12 15:39:14.571832 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 15:39:14.571842 | orchestrator | Saturday 12 July 2025 15:37:30 +0000 (0:00:01.410) 0:00:52.911 *********
2025-07-12 15:39:14.571851 | orchestrator |
2025-07-12 15:39:14.571861 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 15:39:14.571870 | orchestrator | Saturday 12 July 2025 15:37:31 +0000 (0:00:00.208) 0:00:53.120 *********
2025-07-12 15:39:14.571880 | orchestrator |
2025-07-12 15:39:14.571889 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 15:39:14.571899 | orchestrator | Saturday 12 July 2025 15:37:31 +0000 (0:00:00.063) 0:00:53.184 *********
2025-07-12 15:39:14.571908 | orchestrator |
2025-07-12 15:39:14.571917 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 15:39:14.571932 | orchestrator | Saturday 12 July 2025 15:37:31 +0000 (0:00:00.072) 0:00:53.256 *********
2025-07-12 15:39:14.571942 | orchestrator |
2025-07-12 15:39:14.571951 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 15:39:14.571961 | orchestrator | Saturday 12 July 2025 15:37:31 +0000 (0:00:00.066) 0:00:53.323 *********
2025-07-12 15:39:14.571970 | orchestrator |
2025-07-12 15:39:14.571987 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 15:39:14.571997 | orchestrator | Saturday 12 July 2025 15:37:31 +0000 (0:00:00.088) 0:00:53.411 *********
2025-07-12 15:39:14.572006 | orchestrator |
2025-07-12 15:39:14.572016 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 15:39:14.572025 | orchestrator | Saturday 12 July 2025 15:37:31 +0000 (0:00:00.069) 0:00:53.481 *********
2025-07-12 15:39:14.572034 | orchestrator |
2025-07-12 15:39:14.572044 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-07-12 15:39:14.572053 | orchestrator | Saturday 12 July 2025 15:37:31 +0000 (0:00:00.124) 0:00:53.606 *********
2025-07-12 15:39:14.572070 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:39:14.572080 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:39:14.572089 | orchestrator | changed: [testbed-manager]
2025-07-12 15:39:14.572099 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:39:14.572108 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:39:14.572117 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:39:14.572127 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:39:14.572136 | orchestrator |
2025-07-12 15:39:14.572145 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-07-12 15:39:14.572155 | orchestrator | Saturday 12 July 2025 15:38:13 +0000 (0:00:41.711) 0:01:35.317 *********
2025-07-12 15:39:14.572164 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:39:14.572173 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:39:14.572183 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:39:14.572192 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:39:14.572201 | orchestrator | changed: [testbed-manager]
2025-07-12 15:39:14.572210 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:39:14.572220 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:39:14.572229 | orchestrator |
2025-07-12 15:39:14.572238 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-07-12 15:39:14.572248 | orchestrator | Saturday 12 July 2025 15:39:00 +0000 (0:00:47.322) 0:02:22.640 *********
2025-07-12 15:39:14.572257 | orchestrator | ok: [testbed-manager]
2025-07-12 15:39:14.572267 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:39:14.572276 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:39:14.572286 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:39:14.572295 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:39:14.572304 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:39:14.572313 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:39:14.572345 | orchestrator |
2025-07-12 15:39:14.572355 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-07-12 15:39:14.572364 | orchestrator | Saturday 12 July 2025 15:39:02 +0000 (0:00:01.980) 0:02:24.621 *********
2025-07-12 15:39:14.572374 | orchestrator | changed: [testbed-manager]
2025-07-12 15:39:14.572383 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:39:14.572393 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:39:14.572402 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:39:14.572411 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:39:14.572420 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:39:14.572430 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:39:14.572439 | orchestrator |
2025-07-12 15:39:14.572449 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:39:14.572458 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 15:39:14.572468 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 15:39:14.572478 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 15:39:14.572488 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 15:39:14.572504 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 15:39:14.572514 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 15:39:14.572523 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 15:39:14.572532 | orchestrator |
2025-07-12 15:39:14.572542 | orchestrator |
2025-07-12 15:39:14.572551 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:39:14.572561 | orchestrator | Saturday 12 July 2025 15:39:11 +0000 (0:00:09.176) 0:02:33.797 *********
2025-07-12 15:39:14.572570 | orchestrator | ===============================================================================
2025-07-12 15:39:14.572579 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 47.32s
2025-07-12 15:39:14.572588 | orchestrator | common : Restart fluentd container ------------------------------------- 41.71s
2025-07-12 15:39:14.572598 | orchestrator | common : Restart cron container ----------------------------------------- 9.18s
2025-07-12 15:39:14.572607 | orchestrator | common : Copying over config.json files for services -------------------- 5.60s
2025-07-12 15:39:14.572616 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.09s
2025-07-12 15:39:14.572630 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.76s
2025-07-12 15:39:14.572639 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.25s
2025-07-12 15:39:14.572649 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.83s
2025-07-12 15:39:14.572658 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.12s
2025-07-12 15:39:14.572667 | orchestrator | common : Check common containers ---------------------------------------- 2.62s
2025-07-12 15:39:14.572677 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.54s
2025-07-12 15:39:14.572686 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.42s
2025-07-12 15:39:14.572695 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.20s
2025-07-12 15:39:14.572705 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.19s
2025-07-12 15:39:14.572719 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.98s
2025-07-12 15:39:14.572729 | orchestrator | common : Creating log volume -------------------------------------------- 1.94s
2025-07-12 15:39:14.572738 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.88s
2025-07-12 15:39:14.572747 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.47s
2025-07-12 15:39:14.572757 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.41s
2025-07-12 15:39:14.572766 | orchestrator | common : include_tasks -------------------------------------------------- 1.23s
2025-07-12 15:39:14.572930 | orchestrator | 2025-07-12 15:39:14 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:14.572945 | orchestrator | 2025-07-12 15:39:14 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:14.572955 | orchestrator | 2025-07-12 15:39:14 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:17.609630 | orchestrator | 2025-07-12 15:39:17 | INFO  | Task fcb04d3d-5159-42a1-8b04-73f638ba57e0 is in state STARTED
2025-07-12 15:39:17.609717 | orchestrator | 2025-07-12 15:39:17 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED
2025-07-12 15:39:17.609731 | orchestrator | 2025-07-12 15:39:17 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:17.612157 | orchestrator | 2025-07-12 15:39:17 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:17.612770 | orchestrator | 2025-07-12 15:39:17 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:17.613183 | orchestrator | 2025-07-12 15:39:17 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:17.613203 | orchestrator | 2025-07-12 15:39:17 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:20.642265 | orchestrator | 2025-07-12 15:39:20 | INFO  | Task fcb04d3d-5159-42a1-8b04-73f638ba57e0 is in state STARTED
2025-07-12 15:39:20.643626 | orchestrator | 2025-07-12 15:39:20 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED
2025-07-12 15:39:20.644386 | orchestrator | 2025-07-12 15:39:20 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:20.645159 | orchestrator | 2025-07-12 15:39:20 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:20.645785 | orchestrator | 2025-07-12 15:39:20 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:20.649383 | orchestrator | 2025-07-12 15:39:20 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:20.649406 | orchestrator | 2025-07-12 15:39:20 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:23.678638 | orchestrator | 2025-07-12 15:39:23 | INFO  | Task fcb04d3d-5159-42a1-8b04-73f638ba57e0 is in state STARTED
2025-07-12 15:39:23.680049 | orchestrator | 2025-07-12 15:39:23 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED
2025-07-12 15:39:23.680371 | orchestrator | 2025-07-12 15:39:23 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:23.681301 | orchestrator | 2025-07-12 15:39:23 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:23.681716 | orchestrator | 2025-07-12 15:39:23 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:23.684380 | orchestrator | 2025-07-12 15:39:23 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:23.684450 | orchestrator | 2025-07-12 15:39:23 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:26.759497 | orchestrator | 2025-07-12 15:39:26 | INFO  | Task fcb04d3d-5159-42a1-8b04-73f638ba57e0 is in state STARTED
2025-07-12 15:39:26.759619 | orchestrator | 2025-07-12 15:39:26 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED
2025-07-12 15:39:26.759643 | orchestrator | 2025-07-12 15:39:26 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:26.760465 | orchestrator | 2025-07-12 15:39:26 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:26.760495 | orchestrator | 2025-07-12 15:39:26 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:26.764252 | orchestrator | 2025-07-12 15:39:26 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:26.764288 | orchestrator | 2025-07-12 15:39:26 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:29.806581 | orchestrator | 2025-07-12 15:39:29 | INFO  | Task fcb04d3d-5159-42a1-8b04-73f638ba57e0 is in state STARTED
2025-07-12 15:39:29.806925 | orchestrator | 2025-07-12 15:39:29 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED
2025-07-12 15:39:29.807531 | orchestrator | 2025-07-12 15:39:29 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:29.808173 | orchestrator | 2025-07-12 15:39:29 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:29.808862 | orchestrator | 2025-07-12 15:39:29 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:29.809462 | orchestrator | 2025-07-12 15:39:29 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:29.809598 | orchestrator | 2025-07-12 15:39:29 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:32.832517 | orchestrator | 2025-07-12 15:39:32 | INFO  | Task fcb04d3d-5159-42a1-8b04-73f638ba57e0 is in state SUCCESS
2025-07-12 15:39:32.832737 | orchestrator | 2025-07-12 15:39:32 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:39:32.834173 | orchestrator | 2025-07-12 15:39:32 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED
2025-07-12 15:39:32.834482 | orchestrator | 2025-07-12 15:39:32 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:32.835923 | orchestrator | 2025-07-12 15:39:32 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:32.836630 | orchestrator | 2025-07-12 15:39:32 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:32.837524 | orchestrator | 2025-07-12 15:39:32 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:32.837588 | orchestrator | 2025-07-12 15:39:32 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:35.860141 | orchestrator | 2025-07-12 15:39:35 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:39:35.860486 | orchestrator | 2025-07-12 15:39:35 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED
2025-07-12 15:39:35.861076 | orchestrator | 2025-07-12 15:39:35 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:35.862356 | orchestrator | 2025-07-12 15:39:35 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:35.862952 | orchestrator | 2025-07-12 15:39:35 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:35.864513 | orchestrator | 2025-07-12 15:39:35 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:35.864985 | orchestrator | 2025-07-12 15:39:35 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:38.908488 | orchestrator | 2025-07-12 15:39:38 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:39:38.908575 | orchestrator | 2025-07-12 15:39:38 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED
2025-07-12 15:39:38.917625 | orchestrator | 2025-07-12 15:39:38 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:38.918086 | orchestrator | 2025-07-12 15:39:38 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:38.918771 | orchestrator | 2025-07-12 15:39:38 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:38.924577 | orchestrator | 2025-07-12 15:39:38 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:38.924600 | orchestrator | 2025-07-12 15:39:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:41.983485 | orchestrator | 2025-07-12 15:39:41 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:39:41.983569 | orchestrator | 2025-07-12 15:39:41 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED
2025-07-12 15:39:41.983583 | orchestrator | 2025-07-12 15:39:41 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:41.985611 | orchestrator | 2025-07-12 15:39:41 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:41.987098 | orchestrator | 2025-07-12 15:39:41 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:41.987123 | orchestrator | 2025-07-12 15:39:41 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:41.987135 | orchestrator | 2025-07-12 15:39:41 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:45.017348 | orchestrator | 2025-07-12 15:39:45 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:39:45.019761 | orchestrator | 2025-07-12 15:39:45 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state STARTED
2025-07-12 15:39:45.022524 | orchestrator | 2025-07-12 15:39:45 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:45.029201 | orchestrator | 2025-07-12 15:39:45 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:45.032340 | orchestrator | 2025-07-12 15:39:45 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:45.032712 | orchestrator | 2025-07-12 15:39:45 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:45.032739 | orchestrator | 2025-07-12 15:39:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:48.069113 | orchestrator | 2025-07-12 15:39:48 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:39:48.069520 | orchestrator | 2025-07-12 15:39:48 | INFO  | Task caf4e926-7171-4cf0-afea-ebb15a1dec9c is in state SUCCESS
2025-07-12 15:39:48.070723 | orchestrator |
2025-07-12 15:39:48.070755 | orchestrator |
2025-07-12 15:39:48.070768 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 15:39:48.070779 | orchestrator |
2025-07-12 15:39:48.070791 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 15:39:48.070803 | orchestrator | Saturday 12 July 2025 15:39:18 +0000 (0:00:00.633) 0:00:00.633 *********
2025-07-12 15:39:48.070814 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:39:48.070825 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:39:48.070836 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:39:48.070846 | orchestrator |
2025-07-12 15:39:48.070857 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 15:39:48.070868 | orchestrator | Saturday 12 July 2025 15:39:19 +0000 (0:00:00.717) 0:00:01.351 *********
2025-07-12 15:39:48.070879 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-07-12 15:39:48.070889 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-07-12 15:39:48.070900 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-07-12 15:39:48.070910 | orchestrator |
2025-07-12 15:39:48.070921 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-07-12 15:39:48.070931 | orchestrator |
2025-07-12 15:39:48.070942 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-07-12 15:39:48.070952 | orchestrator | Saturday 12 July 2025 15:39:19 +0000 (0:00:00.529) 0:00:01.880 *********
2025-07-12 15:39:48.070963 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:39:48.070974 | orchestrator |
2025-07-12 15:39:48.070984 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-07-12 15:39:48.070995 | orchestrator | Saturday 12 July 2025 15:39:20 +0000 (0:00:00.675) 0:00:02.556 *********
2025-07-12 15:39:48.071005 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-07-12 15:39:48.071016 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-07-12 15:39:48.071049 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-07-12 15:39:48.071060 | orchestrator |
2025-07-12 15:39:48.071071 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-07-12 15:39:48.071081 | orchestrator | Saturday 12 July 2025 15:39:21 +0000 (0:00:00.758) 0:00:03.315 *********
2025-07-12 15:39:48.071092 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-07-12
15:39:48.071103 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-07-12 15:39:48.071113 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-07-12 15:39:48.071124 | orchestrator | 2025-07-12 15:39:48.071134 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-07-12 15:39:48.071145 | orchestrator | Saturday 12 July 2025 15:39:24 +0000 (0:00:02.783) 0:00:06.098 ********* 2025-07-12 15:39:48.071155 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:39:48.071166 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:39:48.071177 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:39:48.071187 | orchestrator | 2025-07-12 15:39:48.071198 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-07-12 15:39:48.071208 | orchestrator | Saturday 12 July 2025 15:39:27 +0000 (0:00:03.093) 0:00:09.192 ********* 2025-07-12 15:39:48.071219 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:39:48.071229 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:39:48.071240 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:39:48.071285 | orchestrator | 2025-07-12 15:39:48.071335 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:39:48.071350 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:39:48.071364 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:39:48.071377 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:39:48.071389 | orchestrator | 2025-07-12 15:39:48.071401 | orchestrator | 2025-07-12 15:39:48.071413 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:39:48.071426 | orchestrator | Saturday 12 July 
2025 15:39:30 +0000 (0:00:02.896) 0:00:12.088 ********* 2025-07-12 15:39:48.071438 | orchestrator | =============================================================================== 2025-07-12 15:39:48.071450 | orchestrator | memcached : Check memcached container ----------------------------------- 3.09s 2025-07-12 15:39:48.071462 | orchestrator | memcached : Restart memcached container --------------------------------- 2.90s 2025-07-12 15:39:48.071475 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.78s 2025-07-12 15:39:48.071487 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.76s 2025-07-12 15:39:48.071500 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s 2025-07-12 15:39:48.071513 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.68s 2025-07-12 15:39:48.071525 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2025-07-12 15:39:48.071538 | orchestrator | 2025-07-12 15:39:48.071550 | orchestrator | 2025-07-12 15:39:48.071562 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:39:48.071575 | orchestrator | 2025-07-12 15:39:48.071587 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:39:48.071597 | orchestrator | Saturday 12 July 2025 15:39:18 +0000 (0:00:00.441) 0:00:00.441 ********* 2025-07-12 15:39:48.071608 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:39:48.071619 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:39:48.071629 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:39:48.071639 | orchestrator | 2025-07-12 15:39:48.071650 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:39:48.071673 | orchestrator | Saturday 12 July 2025 15:39:18 +0000 
(0:00:00.448) 0:00:00.890 ********* 2025-07-12 15:39:48.071692 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-07-12 15:39:48.071703 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-07-12 15:39:48.071714 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-07-12 15:39:48.071725 | orchestrator | 2025-07-12 15:39:48.071735 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-07-12 15:39:48.071746 | orchestrator | 2025-07-12 15:39:48.071757 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-07-12 15:39:48.071767 | orchestrator | Saturday 12 July 2025 15:39:19 +0000 (0:00:00.754) 0:00:01.644 ********* 2025-07-12 15:39:48.071778 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:39:48.071789 | orchestrator | 2025-07-12 15:39:48.071800 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-07-12 15:39:48.071810 | orchestrator | Saturday 12 July 2025 15:39:20 +0000 (0:00:00.896) 0:00:02.541 ********* 2025-07-12 15:39:48.071824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.071840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.071852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.071864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.071876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.071902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.071914 | orchestrator | 2025-07-12 15:39:48.071926 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-07-12 15:39:48.071943 | orchestrator | Saturday 12 July 2025 15:39:21 +0000 (0:00:01.401) 0:00:03.942 ********* 2025-07-12 15:39:48.071955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.071968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.071983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.071995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072042 | orchestrator | 2025-07-12 15:39:48.072053 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-07-12 15:39:48.072064 | orchestrator | Saturday 12 July 2025 15:39:25 +0000 (0:00:03.981) 0:00:07.924 ********* 2025-07-12 15:39:48.072076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072162 | orchestrator | 2025-07-12 15:39:48.072172 
| orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-07-12 15:39:48.072184 | orchestrator | Saturday 12 July 2025 15:39:29 +0000 (0:00:03.390) 0:00:11.315 ********* 2025-07-12 15:39:48.072195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2025-07-12 15:39:48.072233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 15:39:48.072279 | orchestrator | 2025-07-12 15:39:48.072290 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 15:39:48.072301 | orchestrator | Saturday 12 July 2025 15:39:31 +0000 (0:00:02.121) 0:00:13.436 ********* 2025-07-12 15:39:48.072333 | orchestrator | 2025-07-12 15:39:48.072353 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 15:39:48.072372 | orchestrator | Saturday 12 July 2025 15:39:31 +0000 (0:00:00.148) 0:00:13.584 ********* 2025-07-12 15:39:48.072390 | orchestrator | 2025-07-12 15:39:48.072402 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 15:39:48.072412 | orchestrator | Saturday 12 July 2025 15:39:31 +0000 (0:00:00.072) 0:00:13.657 ********* 2025-07-12 15:39:48.072423 | orchestrator | 2025-07-12 15:39:48.072433 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-07-12 15:39:48.072444 | orchestrator | Saturday 12 July 2025 15:39:31 +0000 (0:00:00.075) 0:00:13.733 ********* 2025-07-12 15:39:48.072454 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:39:48.072471 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:39:48.072489 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:39:48.072507 | orchestrator | 2025-07-12 15:39:48.072526 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-07-12 15:39:48.072541 | orchestrator | Saturday 12 July 2025 15:39:40 +0000 (0:00:08.873) 0:00:22.607 ********* 2025-07-12 15:39:48.072552 | orchestrator | changed: [testbed-node-1] 2025-07-12 
15:39:48.072562 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:39:48.072608 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:39:48.072621 | orchestrator | 2025-07-12 15:39:48.072632 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:39:48.072643 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:39:48.072654 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:39:48.072665 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:39:48.072676 | orchestrator | 2025-07-12 15:39:48.072686 | orchestrator | 2025-07-12 15:39:48.072697 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:39:48.072707 | orchestrator | Saturday 12 July 2025 15:39:45 +0000 (0:00:04.804) 0:00:27.411 ********* 2025-07-12 15:39:48.072727 | orchestrator | =============================================================================== 2025-07-12 15:39:48.072738 | orchestrator | redis : Restart redis container ----------------------------------------- 8.87s 2025-07-12 15:39:48.072749 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.80s 2025-07-12 15:39:48.072759 | orchestrator | redis : Copying over default config.json files -------------------------- 3.98s 2025-07-12 15:39:48.072770 | orchestrator | redis : Copying over redis config files --------------------------------- 3.39s 2025-07-12 15:39:48.072786 | orchestrator | redis : Check redis containers ------------------------------------------ 2.12s 2025-07-12 15:39:48.072797 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.40s 2025-07-12 15:39:48.072808 | orchestrator | redis : include_tasks 
--------------------------------------------------- 0.90s
2025-07-12 15:39:48.072819 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s
2025-07-12 15:39:48.072829 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s
2025-07-12 15:39:48.072840 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.30s
2025-07-12 15:39:48.072850 | orchestrator | 2025-07-12 15:39:48 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:48.072951 | orchestrator | 2025-07-12 15:39:48 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:48.072966 | orchestrator | 2025-07-12 15:39:48 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:48.073406 | orchestrator | 2025-07-12 15:39:48 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:48.073429 | orchestrator | 2025-07-12 15:39:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:51.114802 | orchestrator | 2025-07-12 15:39:51 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:39:51.114873 | orchestrator | 2025-07-12 15:39:51 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:51.115369 | orchestrator | 2025-07-12 15:39:51 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:51.116026 | orchestrator | 2025-07-12 15:39:51 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:51.118803 | orchestrator | 2025-07-12 15:39:51 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:51.118823 | orchestrator | 2025-07-12 15:39:51 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:54.151532 | orchestrator | 2025-07-12 15:39:54 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:39:54.151727 | orchestrator | 2025-07-12 15:39:54 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:54.152204 | orchestrator | 2025-07-12 15:39:54 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:54.152843 | orchestrator | 2025-07-12 15:39:54 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:54.153271 | orchestrator | 2025-07-12 15:39:54 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:54.153355 | orchestrator | 2025-07-12 15:39:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:39:57.184833 | orchestrator | 2025-07-12 15:39:57 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:39:57.185021 | orchestrator | 2025-07-12 15:39:57 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:39:57.185404 | orchestrator | 2025-07-12 15:39:57 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:39:57.186065 | orchestrator | 2025-07-12 15:39:57 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:39:57.186466 | orchestrator | 2025-07-12 15:39:57 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:39:57.186488 | orchestrator | 2025-07-12 15:39:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:40:00.223688 | orchestrator | 2025-07-12 15:40:00 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:40:00.223780 | orchestrator | 2025-07-12 15:40:00 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:40:00.227623 | orchestrator | 2025-07-12 15:40:00 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state STARTED
2025-07-12 15:40:00.230198 | orchestrator | 2025-07-12 15:40:00 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED
2025-07-12 15:40:00.230240 | orchestrator | 2025-07-12 15:40:00 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:40:00.230258 | orchestrator | 2025-07-12 15:40:00 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:40:03.260136 | orchestrator | 2025-07-12 15:40:03 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED
2025-07-12 15:40:03.260413 | orchestrator | 2025-07-12 15:40:03 | INFO  | Task df0c959d-dbc7-4102-bfb4-3ca3612852e0 is in state STARTED
2025-07-12 15:40:03.264733 | orchestrator | 2025-07-12 15:40:03 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED
2025-07-12 15:40:03.265980 | orchestrator |
2025-07-12 15:40:03.266064 | orchestrator | 2025-07-12 15:40:03 | INFO  | Task 54fb1c55-e2f7-42c2-96ee-965a3b90169d is in state SUCCESS
2025-07-12 15:40:03.268244 | orchestrator |
2025-07-12 15:40:03.268281 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-07-12 15:40:03.268292 | orchestrator |
2025-07-12 15:40:03.268335 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-07-12 15:40:03.268352 | orchestrator | Saturday 12 July 2025 15:36:38 +0000 (0:00:00.178) 0:00:00.178 *********
2025-07-12 15:40:03.268363 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:40:03.268374 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:40:03.268385 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:40:03.268395 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.268405 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.268416 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.268426 | orchestrator |
2025-07-12 15:40:03.268437 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-07-12 15:40:03.268448 | orchestrator | Saturday 12 July 2025 15:36:38 +0000 (0:00:00.627) 0:00:00.805 *********
2025-07-12 15:40:03.268459 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:40:03.268470 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:40:03.268481 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:40:03.268491 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.268502 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.268512 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.268523 | orchestrator |
2025-07-12 15:40:03.268534 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-07-12 15:40:03.268545 | orchestrator | Saturday 12 July 2025 15:36:39 +0000 (0:00:00.583) 0:00:01.389 *********
2025-07-12 15:40:03.268555 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:40:03.268566 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:40:03.268576 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:40:03.268587 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.268597 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.268608 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.268623 | orchestrator |
2025-07-12 15:40:03.268634 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-07-12 15:40:03.268675 | orchestrator | Saturday 12 July 2025 15:36:40 +0000 (0:00:00.660) 0:00:02.049 *********
2025-07-12 15:40:03.268694 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:40:03.268713 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:40:03.268731 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:40:03.268750 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.268761 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.268772 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.268782 | orchestrator |
2025-07-12 15:40:03.268793 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-07-12 15:40:03.268803 | orchestrator | Saturday 12 July 2025 15:36:42 +0000 (0:00:02.008) 0:00:04.058 *********
2025-07-12 15:40:03.268814 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:40:03.268826 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:40:03.268839 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:40:03.268854 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.268874 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.268891 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.268909 | orchestrator |
2025-07-12 15:40:03.268928 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-07-12 15:40:03.268946 | orchestrator | Saturday 12 July 2025 15:36:43 +0000 (0:00:01.285) 0:00:05.343 *********
2025-07-12 15:40:03.268965 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:40:03.268984 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:40:03.269002 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:40:03.269022 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.269041 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.269060 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.269080 | orchestrator |
2025-07-12 15:40:03.269098 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-07-12 15:40:03.269117 | orchestrator | Saturday 12 July 2025 15:36:44 +0000 (0:00:00.857) 0:00:06.201 *********
2025-07-12 15:40:03.269134 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:40:03.269153 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:40:03.269171 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:40:03.269191 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.269270 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.269293 | orchestrator | skipping: [testbed-node-2]
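The three `changed` forwarding tasks above toggle kernel sysctls on every node before k3s is installed. As a rough sketch of the equivalent persistent settings (the file path and exact keys are assumptions based on typical k3s-ansible prerequisites, not taken from this job's output):

```ini
# /etc/sysctl.d/90-k3s.conf  -- hypothetical path, illustrative values only
# "Enable IPv4 forwarding"
net.ipv4.ip_forward = 1
# "Enable IPv6 forwarding"
net.ipv6.conf.all.forwarding = 1
# "Enable IPv6 router advertisements" (accept RAs even with forwarding on)
net.ipv6.conf.all.accept_ra = 2
```

Such a fragment would be applied with `sysctl --system` (or per-key `sysctl -w`); the Ansible tasks report `changed` because the values differed from the kernel defaults.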
2025-07-12 15:40:03.269346 | orchestrator |
2025-07-12 15:40:03.269358 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-07-12 15:40:03.269369 | orchestrator | Saturday 12 July 2025 15:36:45 +0000 (0:00:00.667) 0:00:06.868 *********
2025-07-12 15:40:03.269379 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:40:03.269390 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:40:03.269401 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:40:03.269411 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.269422 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.269432 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.269443 | orchestrator |
2025-07-12 15:40:03.269454 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-07-12 15:40:03.269465 | orchestrator | Saturday 12 July 2025 15:36:45 +0000 (0:00:00.631) 0:00:07.500 *********
2025-07-12 15:40:03.269475 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 15:40:03.269486 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 15:40:03.269497 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:40:03.269508 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 15:40:03.269518 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 15:40:03.269529 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:40:03.269540 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 15:40:03.269569 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 15:40:03.269581 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:40:03.269592 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 15:40:03.269616 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 15:40:03.269628 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.269638 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 15:40:03.269649 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 15:40:03.269660 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.269671 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 15:40:03.269681 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 15:40:03.269692 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.269703 | orchestrator |
2025-07-12 15:40:03.269713 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-07-12 15:40:03.269724 | orchestrator | Saturday 12 July 2025 15:36:46 +0000 (0:00:00.790) 0:00:08.291 *********
2025-07-12 15:40:03.269735 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:40:03.269746 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:40:03.269756 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:40:03.269770 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.269789 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.269808 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.269827 | orchestrator |
2025-07-12 15:40:03.269845 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-07-12 15:40:03.269865 | orchestrator | Saturday 12 July 2025 15:36:47 +0000 (0:00:01.196) 0:00:09.487 *********
2025-07-12 15:40:03.269884 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:40:03.269902 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:40:03.269920 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:40:03.269939 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.269957 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.269976 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.269994 | orchestrator |
2025-07-12 15:40:03.270013 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-07-12 15:40:03.270138 | orchestrator | Saturday 12 July 2025 15:36:48 +0000 (0:00:00.894) 0:00:10.382 *********
2025-07-12 15:40:03.270151 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:40:03.270161 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:40:03.270172 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.270182 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:40:03.270193 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.270204 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.270214 | orchestrator |
2025-07-12 15:40:03.270225 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-07-12 15:40:03.270235 | orchestrator | Saturday 12 July 2025 15:36:54 +0000 (0:00:06.239) 0:00:16.621 *********
2025-07-12 15:40:03.270246 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:40:03.270257 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:40:03.270267 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:40:03.270277 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.270288 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.270298 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.270363 | orchestrator |
2025-07-12 15:40:03.270374 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-07-12 15:40:03.270385 | orchestrator | Saturday 12 July 2025 15:36:56 +0000 (0:00:01.246) 0:00:17.867 *********
2025-07-12 15:40:03.270395 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:40:03.270406 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:40:03.270416 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:40:03.270438 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.270450 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.270468 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.270487 | orchestrator |
2025-07-12 15:40:03.270506 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-07-12 15:40:03.270525 | orchestrator | Saturday 12 July 2025 15:36:57 +0000 (0:00:01.646) 0:00:19.514 *********
2025-07-12 15:40:03.270542 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:40:03.270559 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:40:03.270576 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:40:03.270594 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.270611 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.270627 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.270644 | orchestrator |
2025-07-12 15:40:03.270661 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-07-12 15:40:03.270678 | orchestrator | Saturday 12 July 2025 15:36:58 +0000 (0:00:00.831) 0:00:20.345 *********
2025-07-12 15:40:03.270688 | orchestrator | changed: [testbed-node-4] => (item=rancher)
2025-07-12 15:40:03.270698 | orchestrator | changed: [testbed-node-3] => (item=rancher)
2025-07-12 15:40:03.270708 | orchestrator | changed: [testbed-node-5] => (item=rancher)
2025-07-12 15:40:03.270717 | orchestrator | changed: [testbed-node-0] => (item=rancher)
2025-07-12 15:40:03.270727 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
2025-07-12 15:40:03.270736 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
2025-07-12 15:40:03.270745 | orchestrator | changed: [testbed-node-1] => (item=rancher)
2025-07-12 15:40:03.270755 | orchestrator | changed: [testbed-node-2] => (item=rancher)
2025-07-12 15:40:03.270764 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
2025-07-12 15:40:03.270773 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
2025-07-12 15:40:03.270783 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
2025-07-12 15:40:03.270792 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
2025-07-12 15:40:03.270801 | orchestrator |
2025-07-12 15:40:03.270811 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-07-12 15:40:03.270821 | orchestrator | Saturday 12 July 2025 15:37:00 +0000 (0:00:02.002) 0:00:22.347 *********
2025-07-12 15:40:03.270830 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:40:03.270851 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:40:03.270860 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:40:03.270870 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.270879 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.270888 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.270898 | orchestrator |
2025-07-12 15:40:03.270919 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-07-12 15:40:03.270929 | orchestrator |
2025-07-12 15:40:03.270939 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-07-12 15:40:03.270948 | orchestrator | Saturday 12 July 2025 15:37:02 +0000 (0:00:02.281) 0:00:24.629 *********
2025-07-12 15:40:03.270957 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.270967 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.270976 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.270986 | orchestrator |
2025-07-12 15:40:03.270995 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-07-12 15:40:03.271004 | orchestrator | Saturday 12 July 2025 15:37:04 +0000 (0:00:02.005) 0:00:26.634 *********
2025-07-12 15:40:03.271014 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.271023 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.271033 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.271042 | orchestrator |
2025-07-12 15:40:03.271051 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-07-12 15:40:03.271061 | orchestrator | Saturday 12 July 2025 15:37:06 +0000 (0:00:01.382) 0:00:28.017 *********
2025-07-12 15:40:03.271070 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.271087 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.271097 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.271106 | orchestrator |
2025-07-12 15:40:03.271115 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-07-12 15:40:03.271125 | orchestrator | Saturday 12 July 2025 15:37:07 +0000 (0:00:01.317) 0:00:29.335 *********
2025-07-12 15:40:03.271134 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.271144 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.271153 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.271162 | orchestrator |
2025-07-12 15:40:03.271172 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-07-12 15:40:03.271181 | orchestrator | Saturday 12 July 2025 15:37:08 +0000 (0:00:01.263) 0:00:30.598 *********
2025-07-12 15:40:03.271191 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.271200 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.271209 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.271219 | orchestrator |
2025-07-12 15:40:03.271228 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-07-12 15:40:03.271237 | orchestrator | Saturday 12 July 2025 15:37:09 +0000 (0:00:00.549) 0:00:31.147 *********
2025-07-12 15:40:03.271247 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.271256 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.271266 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.271275 | orchestrator |
2025-07-12 15:40:03.271284 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-07-12 15:40:03.271294 | orchestrator | Saturday 12 July 2025 15:37:10 +0000 (0:00:01.028) 0:00:32.175 *********
2025-07-12 15:40:03.271320 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.271330 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.271340 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.271349 | orchestrator |
2025-07-12 15:40:03.271359 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-07-12 15:40:03.271368 | orchestrator | Saturday 12 July 2025 15:37:12 +0000 (0:00:01.885) 0:00:34.061 *********
2025-07-12 15:40:03.271378 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:40:03.271387 | orchestrator |
2025-07-12 15:40:03.271397 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-07-12 15:40:03.271406 | orchestrator | Saturday 12 July 2025 15:37:13 +0000 (0:00:01.049) 0:00:35.110 *********
2025-07-12 15:40:03.271416 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.271425 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.271435 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.271444 | orchestrator |
2025-07-12 15:40:03.271454 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-07-12 15:40:03.271463 | orchestrator | Saturday 12 July 2025 15:37:15 +0000 (0:00:01.886) 0:00:36.996 *********
2025-07-12 15:40:03.271473 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.271482 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.271492 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.271501 | orchestrator |
2025-07-12 15:40:03.271511 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-07-12 15:40:03.271520 | orchestrator | Saturday 12 July 2025 15:37:16 +0000 (0:00:01.024) 0:00:38.021 *********
2025-07-12 15:40:03.271530 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.271539 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.271549 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.271558 | orchestrator |
2025-07-12 15:40:03.271567 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-07-12 15:40:03.271577 | orchestrator | Saturday 12 July 2025 15:37:16 +0000 (0:00:00.821) 0:00:38.842 *********
2025-07-12 15:40:03.271587 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.271596 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.271605 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.271620 | orchestrator |
2025-07-12 15:40:03.271630 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-07-12 15:40:03.271639 | orchestrator | Saturday 12 July 2025 15:37:18 +0000 (0:00:01.604) 0:00:40.447 *********
2025-07-12 15:40:03.271649 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.271658 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.271667 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.271677 | orchestrator |
2025-07-12 15:40:03.271686 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-07-12 15:40:03.271696 | orchestrator | Saturday 12 July 2025 15:37:19 +0000 (0:00:00.427) 0:00:40.875 *********
2025-07-12 15:40:03.271705 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.271715 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.271724 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.271733 | orchestrator |
2025-07-12 15:40:03.271743 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-07-12 15:40:03.271756 | orchestrator | Saturday 12 July 2025 15:37:19 +0000 (0:00:00.504) 0:00:41.380 *********
2025-07-12 15:40:03.271766 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.271776 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.271785 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.271795 | orchestrator |
2025-07-12 15:40:03.271810 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-07-12 15:40:03.271820 | orchestrator | Saturday 12 July 2025 15:37:21 +0000 (0:00:01.743) 0:00:43.124 *********
2025-07-12 15:40:03.271830 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-12 15:40:03.271840 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-12 15:40:03.271850 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-12 15:40:03.271860 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-12 15:40:03.271869 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-12 15:40:03.271879 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-12 15:40:03.271888 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-12 15:40:03.271898 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-12 15:40:03.271907 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-12 15:40:03.271917 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-07-12 15:40:03.271926 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-07-12 15:40:03.271936 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
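The `FAILED - RETRYING` burst above is Ansible's `until`/`retries`/`delay` pattern: the verify task is re-run until every server reports as joined or the retry budget is exhausted. A minimal sketch of that polling loop in Python (the helper name `wait_until` and the counter are hypothetical, not the role's actual code):

```python
import time


def wait_until(check, retries=20, delay=3.0):
    """Poll `check` until it returns truthy, like an Ansible
    until/retries/delay loop; raise once all retries are used up."""
    for _ in range(retries):
        result = check()
        if result:
            return result
        time.sleep(delay)
    raise TimeoutError("condition not met after all retries")


# Example mirroring the log: a few failed attempts, then success.
attempts = {"n": 0}


def nodes_joined():
    attempts["n"] += 1
    return attempts["n"] >= 4


wait_until(nodes_joined, retries=20, delay=0)  # succeeds on the 4th attempt
```

Note that in the job each probe also waits between checks (the `delay` here), which is why the three `ok` results only arrive some 45 seconds after the task started.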
2025-07-12 15:40:03.271945 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.271955 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.271964 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.271974 | orchestrator |
2025-07-12 15:40:03.271984 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-07-12 15:40:03.271993 | orchestrator | Saturday 12 July 2025 15:38:06 +0000 (0:00:44.959) 0:01:28.083 *********
2025-07-12 15:40:03.272009 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.272018 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.272028 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.272037 | orchestrator |
2025-07-12 15:40:03.272046 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-07-12 15:40:03.272056 | orchestrator | Saturday 12 July 2025 15:38:06 +0000 (0:00:00.364) 0:01:28.448 *********
2025-07-12 15:40:03.272065 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.272075 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.272084 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.272093 | orchestrator |
2025-07-12 15:40:03.272103 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-07-12 15:40:03.272112 | orchestrator | Saturday 12 July 2025 15:38:08 +0000 (0:00:01.522) 0:01:29.970 *********
2025-07-12 15:40:03.272122 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.272131 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.272141 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.272150 | orchestrator |
2025-07-12 15:40:03.272160 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-07-12 15:40:03.272169 | orchestrator | Saturday 12 July 2025 15:38:09 +0000 (0:00:01.302) 0:01:31.273 *********
2025-07-12 15:40:03.272179 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.272188 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.272198 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.272207 | orchestrator |
2025-07-12 15:40:03.272217 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-07-12 15:40:03.272226 | orchestrator | Saturday 12 July 2025 15:38:33 +0000 (0:00:23.831) 0:01:55.105 *********
2025-07-12 15:40:03.272236 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.272245 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.272254 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.272264 | orchestrator |
2025-07-12 15:40:03.272273 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-07-12 15:40:03.272283 | orchestrator | Saturday 12 July 2025 15:38:33 +0000 (0:00:00.731) 0:01:55.837 *********
2025-07-12 15:40:03.272292 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.272342 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.272353 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.272363 | orchestrator |
2025-07-12 15:40:03.272372 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-07-12 15:40:03.272382 | orchestrator | Saturday 12 July 2025 15:38:34 +0000 (0:00:01.014) 0:01:56.851 *********
2025-07-12 15:40:03.272391 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.272401 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.272410 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.272419 | orchestrator |
2025-07-12 15:40:03.272430 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-07-12 15:40:03.272438 | orchestrator | Saturday 12 July 2025 15:38:35 +0000 (0:00:00.712) 0:01:57.564 *********
2025-07-12 15:40:03.272446 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.272459 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.272467 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.272475 | orchestrator |
2025-07-12 15:40:03.272483 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-07-12 15:40:03.272491 | orchestrator | Saturday 12 July 2025 15:38:36 +0000 (0:00:00.710) 0:01:58.274 *********
2025-07-12 15:40:03.272498 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.272506 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.272514 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.272522 | orchestrator |
2025-07-12 15:40:03.272529 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-07-12 15:40:03.272537 | orchestrator | Saturday 12 July 2025 15:38:36 +0000 (0:00:00.308) 0:01:58.583 *********
2025-07-12 15:40:03.272545 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.272557 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.272565 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.272573 | orchestrator |
2025-07-12 15:40:03.272581 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-07-12 15:40:03.272589 | orchestrator | Saturday 12 July 2025 15:38:37 +0000 (0:00:01.012) 0:01:59.596 *********
2025-07-12 15:40:03.272596 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.272604 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.272612 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.272620 | orchestrator |
2025-07-12 15:40:03.272627 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-07-12 15:40:03.272635 | orchestrator | Saturday 12 July 2025 15:38:38 +0000 (0:00:00.689) 0:02:00.285 *********
2025-07-12 15:40:03.272643 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.272651 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.272658 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.272666 | orchestrator |
2025-07-12 15:40:03.272674 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-07-12 15:40:03.272681 | orchestrator | Saturday 12 July 2025 15:38:39 +0000 (0:00:00.987) 0:02:01.273 *********
2025-07-12 15:40:03.272689 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:40:03.272697 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:40:03.272705 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:40:03.272712 | orchestrator |
2025-07-12 15:40:03.272720 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-07-12 15:40:03.272728 | orchestrator | Saturday 12 July 2025 15:38:40 +0000 (0:00:00.909) 0:02:02.182 *********
2025-07-12 15:40:03.272736 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.272743 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.272751 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.272759 | orchestrator |
2025-07-12 15:40:03.272766 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-07-12 15:40:03.272774 | orchestrator | Saturday 12 July 2025 15:38:40 +0000 (0:00:00.604) 0:02:02.787 *********
2025-07-12 15:40:03.272782 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:03.272790 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:03.272797 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:03.272805 | orchestrator |
2025-07-12 15:40:03.272813 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-07-12 15:40:03.272821 | orchestrator | Saturday 12 July 2025 15:38:41 +0000 (0:00:00.283) 0:02:03.070 *********
2025-07-12 15:40:03.272829 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.272836 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.272844 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.272853 | orchestrator |
2025-07-12 15:40:03.272867 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-07-12 15:40:03.272881 | orchestrator | Saturday 12 July 2025 15:38:41 +0000 (0:00:00.672) 0:02:03.743 *********
2025-07-12 15:40:03.272894 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:40:03.272907 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:40:03.272920 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:40:03.272933 | orchestrator |
2025-07-12 15:40:03.272946 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-07-12 15:40:03.272959 | orchestrator | Saturday 12 July 2025 15:38:42 +0000 (0:00:00.598) 0:02:04.341 *********
2025-07-12 15:40:03.272967 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-12 15:40:03.272974 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-12 15:40:03.272982 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-12 15:40:03.272990 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-12 15:40:03.272997 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-12 15:40:03.273011 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-12 15:40:03.273019 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-12 15:40:03.273026 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-07-12 15:40:03.273034 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-12 15:40:03.273042 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-12 15:40:03.273049 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-12 15:40:03.273057 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-07-12 15:40:03.273065 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-12 15:40:03.273073 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-12 15:40:03.273085 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-12 15:40:03.273094 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-12 15:40:03.273101 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-12 15:40:03.273109 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-12 15:40:03.273117 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-12 15:40:03.273125 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-12 15:40:03.273132 | orchestrator |
2025-07-12 15:40:03.273140 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-07-12 15:40:03.273148 | orchestrator |
2025-07-12 15:40:03.273156 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-07-12 15:40:03.273163 | orchestrator | Saturday 12 July 2025 15:38:45 +0000
(0:00:03.286) 0:02:07.628 ********* 2025-07-12 15:40:03.273171 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:40:03.273179 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:40:03.273186 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:40:03.273194 | orchestrator | 2025-07-12 15:40:03.273202 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-07-12 15:40:03.273210 | orchestrator | Saturday 12 July 2025 15:38:46 +0000 (0:00:00.325) 0:02:07.953 ********* 2025-07-12 15:40:03.273218 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:40:03.273225 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:40:03.273233 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:40:03.273241 | orchestrator | 2025-07-12 15:40:03.273248 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-07-12 15:40:03.273256 | orchestrator | Saturday 12 July 2025 15:38:46 +0000 (0:00:00.622) 0:02:08.576 ********* 2025-07-12 15:40:03.273264 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:40:03.273272 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:40:03.273279 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:40:03.273287 | orchestrator | 2025-07-12 15:40:03.273295 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-07-12 15:40:03.273317 | orchestrator | Saturday 12 July 2025 15:38:47 +0000 (0:00:00.519) 0:02:09.095 ********* 2025-07-12 15:40:03.273326 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:40:03.273334 | orchestrator | 2025-07-12 15:40:03.273341 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-07-12 15:40:03.273349 | orchestrator | Saturday 12 July 2025 15:38:47 +0000 (0:00:00.443) 0:02:09.538 ********* 2025-07-12 15:40:03.273361 | orchestrator | skipping: [testbed-node-3] 2025-07-12 
15:40:03.273369 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:40:03.273377 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:40:03.273385 | orchestrator | 2025-07-12 15:40:03.273392 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-07-12 15:40:03.273400 | orchestrator | Saturday 12 July 2025 15:38:47 +0000 (0:00:00.313) 0:02:09.852 ********* 2025-07-12 15:40:03.273408 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:40:03.273415 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:40:03.273423 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:40:03.273430 | orchestrator | 2025-07-12 15:40:03.273438 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-07-12 15:40:03.273446 | orchestrator | Saturday 12 July 2025 15:38:48 +0000 (0:00:00.511) 0:02:10.363 ********* 2025-07-12 15:40:03.273453 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:40:03.273461 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:40:03.273469 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:40:03.273476 | orchestrator | 2025-07-12 15:40:03.273484 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-07-12 15:40:03.273492 | orchestrator | Saturday 12 July 2025 15:38:48 +0000 (0:00:00.326) 0:02:10.690 ********* 2025-07-12 15:40:03.273499 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:40:03.273507 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:40:03.273515 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:40:03.273523 | orchestrator | 2025-07-12 15:40:03.273536 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-07-12 15:40:03.273544 | orchestrator | Saturday 12 July 2025 15:38:49 +0000 (0:00:00.644) 0:02:11.334 ********* 2025-07-12 15:40:03.273552 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:40:03.273560 | 
orchestrator | changed: [testbed-node-4] 2025-07-12 15:40:03.273567 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:40:03.273575 | orchestrator | 2025-07-12 15:40:03.273583 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-07-12 15:40:03.273590 | orchestrator | Saturday 12 July 2025 15:38:50 +0000 (0:00:01.083) 0:02:12.418 ********* 2025-07-12 15:40:03.273598 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:40:03.273606 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:40:03.273613 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:40:03.273621 | orchestrator | 2025-07-12 15:40:03.273628 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-07-12 15:40:03.273636 | orchestrator | Saturday 12 July 2025 15:38:52 +0000 (0:00:01.569) 0:02:13.988 ********* 2025-07-12 15:40:03.273644 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:40:03.273651 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:40:03.273659 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:40:03.273666 | orchestrator | 2025-07-12 15:40:03.273674 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-07-12 15:40:03.273682 | orchestrator | 2025-07-12 15:40:03.273689 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-07-12 15:40:03.273697 | orchestrator | Saturday 12 July 2025 15:39:04 +0000 (0:00:12.230) 0:02:26.218 ********* 2025-07-12 15:40:03.273708 | orchestrator | ok: [testbed-manager] 2025-07-12 15:40:03.273715 | orchestrator | 2025-07-12 15:40:03.273723 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-07-12 15:40:03.273731 | orchestrator | Saturday 12 July 2025 15:39:05 +0000 (0:00:00.760) 0:02:26.979 ********* 2025-07-12 15:40:03.273743 | orchestrator | changed: [testbed-manager] 2025-07-12 
15:40:03.273751 | orchestrator | 2025-07-12 15:40:03.273759 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-12 15:40:03.273767 | orchestrator | Saturday 12 July 2025 15:39:05 +0000 (0:00:00.421) 0:02:27.400 ********* 2025-07-12 15:40:03.273775 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-12 15:40:03.273782 | orchestrator | 2025-07-12 15:40:03.273790 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-12 15:40:03.273802 | orchestrator | Saturday 12 July 2025 15:39:06 +0000 (0:00:00.996) 0:02:28.397 ********* 2025-07-12 15:40:03.273810 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:03.273818 | orchestrator | 2025-07-12 15:40:03.273825 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-07-12 15:40:03.273833 | orchestrator | Saturday 12 July 2025 15:39:07 +0000 (0:00:00.760) 0:02:29.157 ********* 2025-07-12 15:40:03.273840 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:03.273848 | orchestrator | 2025-07-12 15:40:03.273856 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-07-12 15:40:03.273863 | orchestrator | Saturday 12 July 2025 15:39:07 +0000 (0:00:00.569) 0:02:29.727 ********* 2025-07-12 15:40:03.273871 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 15:40:03.273879 | orchestrator | 2025-07-12 15:40:03.273887 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-07-12 15:40:03.273894 | orchestrator | Saturday 12 July 2025 15:39:09 +0000 (0:00:01.512) 0:02:31.239 ********* 2025-07-12 15:40:03.273902 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 15:40:03.273909 | orchestrator | 2025-07-12 15:40:03.273917 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2025-07-12 15:40:03.273925 | orchestrator | Saturday 12 July 2025 15:39:10 +0000 (0:00:00.789) 0:02:32.029 ********* 2025-07-12 15:40:03.273932 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:03.273940 | orchestrator | 2025-07-12 15:40:03.273947 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-07-12 15:40:03.273956 | orchestrator | Saturday 12 July 2025 15:39:10 +0000 (0:00:00.430) 0:02:32.459 ********* 2025-07-12 15:40:03.273969 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:03.273983 | orchestrator | 2025-07-12 15:40:03.273996 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-07-12 15:40:03.274008 | orchestrator | 2025-07-12 15:40:03.274036 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-07-12 15:40:03.274046 | orchestrator | Saturday 12 July 2025 15:39:11 +0000 (0:00:00.437) 0:02:32.897 ********* 2025-07-12 15:40:03.274054 | orchestrator | ok: [testbed-manager] 2025-07-12 15:40:03.274062 | orchestrator | 2025-07-12 15:40:03.274069 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-07-12 15:40:03.274077 | orchestrator | Saturday 12 July 2025 15:39:11 +0000 (0:00:00.152) 0:02:33.049 ********* 2025-07-12 15:40:03.274085 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 15:40:03.274093 | orchestrator | 2025-07-12 15:40:03.274100 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-07-12 15:40:03.274108 | orchestrator | Saturday 12 July 2025 15:39:11 +0000 (0:00:00.454) 0:02:33.504 ********* 2025-07-12 15:40:03.274116 | orchestrator | ok: [testbed-manager] 2025-07-12 15:40:03.274123 | orchestrator | 2025-07-12 15:40:03.274131 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2025-07-12 15:40:03.274139 | orchestrator | Saturday 12 July 2025 15:39:12 +0000 (0:00:00.850) 0:02:34.355 ********* 2025-07-12 15:40:03.274147 | orchestrator | ok: [testbed-manager] 2025-07-12 15:40:03.274155 | orchestrator | 2025-07-12 15:40:03.274162 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-07-12 15:40:03.274170 | orchestrator | Saturday 12 July 2025 15:39:14 +0000 (0:00:01.752) 0:02:36.107 ********* 2025-07-12 15:40:03.274178 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:03.274185 | orchestrator | 2025-07-12 15:40:03.274193 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-07-12 15:40:03.274201 | orchestrator | Saturday 12 July 2025 15:39:15 +0000 (0:00:00.870) 0:02:36.978 ********* 2025-07-12 15:40:03.274208 | orchestrator | ok: [testbed-manager] 2025-07-12 15:40:03.274216 | orchestrator | 2025-07-12 15:40:03.274224 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-07-12 15:40:03.274232 | orchestrator | Saturday 12 July 2025 15:39:15 +0000 (0:00:00.420) 0:02:37.399 ********* 2025-07-12 15:40:03.274245 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:03.274253 | orchestrator | 2025-07-12 15:40:03.274260 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-07-12 15:40:03.274268 | orchestrator | Saturday 12 July 2025 15:39:20 +0000 (0:00:05.161) 0:02:42.561 ********* 2025-07-12 15:40:03.274276 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:03.274283 | orchestrator | 2025-07-12 15:40:03.274291 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-07-12 15:40:03.274311 | orchestrator | Saturday 12 July 2025 15:39:30 +0000 (0:00:10.276) 0:02:52.837 ********* 2025-07-12 15:40:03.274320 | orchestrator | ok: [testbed-manager] 2025-07-12 
15:40:03.274327 | orchestrator | 2025-07-12 15:40:03.274335 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-07-12 15:40:03.274343 | orchestrator | 2025-07-12 15:40:03.274350 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-07-12 15:40:03.274358 | orchestrator | Saturday 12 July 2025 15:39:31 +0000 (0:00:00.496) 0:02:53.334 ********* 2025-07-12 15:40:03.274366 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:40:03.274374 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:40:03.274381 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:40:03.274389 | orchestrator | 2025-07-12 15:40:03.274401 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-07-12 15:40:03.274420 | orchestrator | Saturday 12 July 2025 15:39:31 +0000 (0:00:00.458) 0:02:53.792 ********* 2025-07-12 15:40:03.274435 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274449 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:40:03.274463 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:40:03.274477 | orchestrator | 2025-07-12 15:40:03.274499 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-07-12 15:40:03.274514 | orchestrator | Saturday 12 July 2025 15:39:32 +0000 (0:00:00.302) 0:02:54.094 ********* 2025-07-12 15:40:03.274528 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:40:03.274542 | orchestrator | 2025-07-12 15:40:03.274556 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-07-12 15:40:03.274570 | orchestrator | Saturday 12 July 2025 15:39:32 +0000 (0:00:00.457) 0:02:54.552 ********* 2025-07-12 15:40:03.274584 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274599 | orchestrator | 
2025-07-12 15:40:03.274613 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-07-12 15:40:03.274627 | orchestrator | Saturday 12 July 2025 15:39:33 +0000 (0:00:00.423) 0:02:54.976 ********* 2025-07-12 15:40:03.274642 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274656 | orchestrator | 2025-07-12 15:40:03.274669 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-07-12 15:40:03.274683 | orchestrator | Saturday 12 July 2025 15:39:33 +0000 (0:00:00.226) 0:02:55.203 ********* 2025-07-12 15:40:03.274697 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274712 | orchestrator | 2025-07-12 15:40:03.274725 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-07-12 15:40:03.274737 | orchestrator | Saturday 12 July 2025 15:39:33 +0000 (0:00:00.190) 0:02:55.393 ********* 2025-07-12 15:40:03.274745 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274753 | orchestrator | 2025-07-12 15:40:03.274761 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-07-12 15:40:03.274769 | orchestrator | Saturday 12 July 2025 15:39:33 +0000 (0:00:00.213) 0:02:55.607 ********* 2025-07-12 15:40:03.274776 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274784 | orchestrator | 2025-07-12 15:40:03.274792 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-07-12 15:40:03.274799 | orchestrator | Saturday 12 July 2025 15:39:33 +0000 (0:00:00.184) 0:02:55.791 ********* 2025-07-12 15:40:03.274807 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274822 | orchestrator | 2025-07-12 15:40:03.274830 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-07-12 15:40:03.274837 | orchestrator | Saturday 12 July 2025 15:39:34 +0000 
(0:00:00.172) 0:02:55.964 ********* 2025-07-12 15:40:03.274845 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274853 | orchestrator | 2025-07-12 15:40:03.274861 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-07-12 15:40:03.274868 | orchestrator | Saturday 12 July 2025 15:39:34 +0000 (0:00:00.158) 0:02:56.122 ********* 2025-07-12 15:40:03.274876 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274884 | orchestrator | 2025-07-12 15:40:03.274892 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-07-12 15:40:03.274899 | orchestrator | Saturday 12 July 2025 15:39:34 +0000 (0:00:00.178) 0:02:56.301 ********* 2025-07-12 15:40:03.274907 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274915 | orchestrator | 2025-07-12 15:40:03.274923 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-07-12 15:40:03.274931 | orchestrator | Saturday 12 July 2025 15:39:34 +0000 (0:00:00.228) 0:02:56.529 ********* 2025-07-12 15:40:03.274939 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-07-12 15:40:03.274948 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-07-12 15:40:03.274962 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.274975 | orchestrator | 2025-07-12 15:40:03.274989 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-07-12 15:40:03.275003 | orchestrator | Saturday 12 July 2025 15:39:34 +0000 (0:00:00.298) 0:02:56.828 ********* 2025-07-12 15:40:03.275017 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275030 | orchestrator | 2025-07-12 15:40:03.275044 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-07-12 15:40:03.275058 | orchestrator | Saturday 12 July 2025 15:39:35 +0000 (0:00:00.184) 
0:02:57.012 ********* 2025-07-12 15:40:03.275071 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275083 | orchestrator | 2025-07-12 15:40:03.275097 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-07-12 15:40:03.275111 | orchestrator | Saturday 12 July 2025 15:39:35 +0000 (0:00:00.224) 0:02:57.237 ********* 2025-07-12 15:40:03.275126 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275139 | orchestrator | 2025-07-12 15:40:03.275149 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-07-12 15:40:03.275157 | orchestrator | Saturday 12 July 2025 15:39:35 +0000 (0:00:00.582) 0:02:57.820 ********* 2025-07-12 15:40:03.275164 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275172 | orchestrator | 2025-07-12 15:40:03.275180 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-07-12 15:40:03.275188 | orchestrator | Saturday 12 July 2025 15:39:36 +0000 (0:00:00.180) 0:02:58.001 ********* 2025-07-12 15:40:03.275195 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275203 | orchestrator | 2025-07-12 15:40:03.275211 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-07-12 15:40:03.275218 | orchestrator | Saturday 12 July 2025 15:39:36 +0000 (0:00:00.190) 0:02:58.191 ********* 2025-07-12 15:40:03.275226 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275234 | orchestrator | 2025-07-12 15:40:03.275241 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-07-12 15:40:03.275249 | orchestrator | Saturday 12 July 2025 15:39:36 +0000 (0:00:00.215) 0:02:58.407 ********* 2025-07-12 15:40:03.275257 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275265 | orchestrator | 2025-07-12 15:40:03.275273 | orchestrator | TASK [k3s_server_post : Parse 
installed Cilium version] ************************ 2025-07-12 15:40:03.275289 | orchestrator | Saturday 12 July 2025 15:39:36 +0000 (0:00:00.221) 0:02:58.629 ********* 2025-07-12 15:40:03.275297 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275352 | orchestrator | 2025-07-12 15:40:03.275361 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-07-12 15:40:03.275382 | orchestrator | Saturday 12 July 2025 15:39:36 +0000 (0:00:00.222) 0:02:58.851 ********* 2025-07-12 15:40:03.275390 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275398 | orchestrator | 2025-07-12 15:40:03.275406 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-07-12 15:40:03.275414 | orchestrator | Saturday 12 July 2025 15:39:37 +0000 (0:00:00.184) 0:02:59.036 ********* 2025-07-12 15:40:03.275422 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275429 | orchestrator | 2025-07-12 15:40:03.275437 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-07-12 15:40:03.275445 | orchestrator | Saturday 12 July 2025 15:39:37 +0000 (0:00:00.163) 0:02:59.200 ********* 2025-07-12 15:40:03.275453 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275459 | orchestrator | 2025-07-12 15:40:03.275466 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-07-12 15:40:03.275473 | orchestrator | Saturday 12 July 2025 15:39:37 +0000 (0:00:00.199) 0:02:59.399 ********* 2025-07-12 15:40:03.275479 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-07-12 15:40:03.275486 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-07-12 15:40:03.275493 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-07-12 15:40:03.275499 | orchestrator | skipping: [testbed-node-0] => 
(item=deployment/hubble-ui)  2025-07-12 15:40:03.275506 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275512 | orchestrator | 2025-07-12 15:40:03.275519 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-07-12 15:40:03.275525 | orchestrator | Saturday 12 July 2025 15:39:38 +0000 (0:00:00.553) 0:02:59.953 ********* 2025-07-12 15:40:03.275532 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275539 | orchestrator | 2025-07-12 15:40:03.275545 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-07-12 15:40:03.275552 | orchestrator | Saturday 12 July 2025 15:39:38 +0000 (0:00:00.239) 0:03:00.193 ********* 2025-07-12 15:40:03.275558 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275565 | orchestrator | 2025-07-12 15:40:03.275571 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-07-12 15:40:03.275578 | orchestrator | Saturday 12 July 2025 15:39:38 +0000 (0:00:00.168) 0:03:00.361 ********* 2025-07-12 15:40:03.275585 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275591 | orchestrator | 2025-07-12 15:40:03.275598 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-07-12 15:40:03.275604 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:00.498) 0:03:00.860 ********* 2025-07-12 15:40:03.275611 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275617 | orchestrator | 2025-07-12 15:40:03.275624 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-07-12 15:40:03.275630 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:00.190) 0:03:01.050 ********* 2025-07-12 15:40:03.275637 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-07-12 15:40:03.275644 | orchestrator | skipping: 
[testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-07-12 15:40:03.275650 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275657 | orchestrator | 2025-07-12 15:40:03.275663 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-07-12 15:40:03.275670 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:00.259) 0:03:01.310 ********* 2025-07-12 15:40:03.275677 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.275687 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:40:03.275698 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:40:03.275710 | orchestrator | 2025-07-12 15:40:03.275721 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-07-12 15:40:03.275733 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:00.373) 0:03:01.683 ********* 2025-07-12 15:40:03.275753 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:40:03.275765 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:40:03.275777 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:40:03.275788 | orchestrator | 2025-07-12 15:40:03.275799 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-07-12 15:40:03.275810 | orchestrator | 2025-07-12 15:40:03.275821 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-07-12 15:40:03.275832 | orchestrator | Saturday 12 July 2025 15:39:40 +0000 (0:00:01.025) 0:03:02.709 ********* 2025-07-12 15:40:03.275839 | orchestrator | ok: [testbed-manager] 2025-07-12 15:40:03.275846 | orchestrator | 2025-07-12 15:40:03.275852 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-07-12 15:40:03.275859 | orchestrator | Saturday 12 July 2025 15:39:41 +0000 (0:00:00.369) 0:03:03.078 ********* 2025-07-12 15:40:03.275865 | orchestrator | included: 
/ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 15:40:03.275871 | orchestrator | 2025-07-12 15:40:03.275878 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-07-12 15:40:03.275884 | orchestrator | Saturday 12 July 2025 15:39:41 +0000 (0:00:00.225) 0:03:03.304 ********* 2025-07-12 15:40:03.275891 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:03.275897 | orchestrator | 2025-07-12 15:40:03.275904 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-07-12 15:40:03.275910 | orchestrator | 2025-07-12 15:40:03.275917 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-07-12 15:40:03.275923 | orchestrator | Saturday 12 July 2025 15:39:46 +0000 (0:00:05.493) 0:03:08.797 ********* 2025-07-12 15:40:03.275930 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:40:03.275936 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:40:03.275943 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:40:03.275949 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:40:03.275955 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:40:03.275962 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:40:03.275968 | orchestrator | 2025-07-12 15:40:03.275979 | orchestrator | TASK [Manage labels] *********************************************************** 2025-07-12 15:40:03.275986 | orchestrator | Saturday 12 July 2025 15:39:47 +0000 (0:00:00.596) 0:03:09.394 ********* 2025-07-12 15:40:03.275997 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-07-12 15:40:03.276004 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-07-12 15:40:03.276011 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-07-12 15:40:03.276017 | orchestrator | ok: 
[testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-07-12 15:40:03.276024 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-07-12 15:40:03.276030 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-07-12 15:40:03.276037 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-07-12 15:40:03.276043 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-07-12 15:40:03.276049 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-07-12 15:40:03.276056 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-07-12 15:40:03.276062 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-07-12 15:40:03.276069 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-07-12 15:40:03.276075 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-07-12 15:40:03.276082 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-07-12 15:40:03.276088 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-12 15:40:03.276100 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-07-12 15:40:03.276106 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-12 15:40:03.276113 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-12 15:40:03.276119 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-12 15:40:03.276126 | orchestrator | ok: 
[testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-12 15:40:03.276132 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-12 15:40:03.276138 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-12 15:40:03.276145 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-12 15:40:03.276151 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-12 15:40:03.276158 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-12 15:40:03.276164 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-12 15:40:03.276171 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-12 15:40:03.276177 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-12 15:40:03.276184 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-12 15:40:03.276190 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-12 15:40:03.276197 | orchestrator | 2025-07-12 15:40:03.276203 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-07-12 15:40:03.276209 | orchestrator | Saturday 12 July 2025 15:40:00 +0000 (0:00:13.355) 0:03:22.750 ********* 2025-07-12 15:40:03.276216 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:40:03.276222 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:40:03.276229 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:40:03.276235 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.276242 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:40:03.276248 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 15:40:03.276255 | orchestrator | 2025-07-12 15:40:03.276261 | orchestrator | TASK [Manage taints] *********************************************************** 2025-07-12 15:40:03.276268 | orchestrator | Saturday 12 July 2025 15:40:01 +0000 (0:00:00.501) 0:03:23.251 ********* 2025-07-12 15:40:03.276274 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:40:03.276281 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:40:03.276287 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:40:03.276294 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:03.276314 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:40:03.276322 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:40:03.276328 | orchestrator | 2025-07-12 15:40:03.276335 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:40:03.276341 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:40:03.276349 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-07-12 15:40:03.276359 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-12 15:40:03.276369 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-12 15:40:03.276376 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 15:40:03.276387 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 15:40:03.276394 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 15:40:03.276400 | orchestrator | 2025-07-12 15:40:03.276407 | orchestrator | 2025-07-12 15:40:03.276413 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-12 15:40:03.276420 | orchestrator | Saturday 12 July 2025 15:40:01 +0000 (0:00:00.508) 0:03:23.760 ********* 2025-07-12 15:40:03.276426 | orchestrator | =============================================================================== 2025-07-12 15:40:03.276433 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.96s 2025-07-12 15:40:03.276440 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 23.83s 2025-07-12 15:40:03.276446 | orchestrator | Manage labels ---------------------------------------------------------- 13.36s 2025-07-12 15:40:03.276453 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.23s 2025-07-12 15:40:03.276459 | orchestrator | kubectl : Install required packages ------------------------------------ 10.28s 2025-07-12 15:40:03.276466 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.24s 2025-07-12 15:40:03.276472 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.49s 2025-07-12 15:40:03.276479 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 5.16s 2025-07-12 15:40:03.276485 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.29s 2025-07-12 15:40:03.276492 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.28s 2025-07-12 15:40:03.276498 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.01s 2025-07-12 15:40:03.276505 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.01s 2025-07-12 15:40:03.276511 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s 
--------------- 2.00s 2025-07-12 15:40:03.276518 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.89s 2025-07-12 15:40:03.276524 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.89s 2025-07-12 15:40:03.276531 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.75s 2025-07-12 15:40:03.276537 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.74s 2025-07-12 15:40:03.276544 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.65s 2025-07-12 15:40:03.276550 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.60s 2025-07-12 15:40:03.276557 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.57s 2025-07-12 15:40:03.276564 | orchestrator | 2025-07-12 15:40:03 | INFO  | Task 46033857-3653-46c4-ae4a-077368778a99 is in state STARTED 2025-07-12 15:40:03.276570 | orchestrator | 2025-07-12 15:40:03 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:03.276577 | orchestrator | 2025-07-12 15:40:03 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:03.276584 | orchestrator | 2025-07-12 15:40:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:40:06.308067 | orchestrator | 2025-07-12 15:40:06 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED 2025-07-12 15:40:06.308736 | orchestrator | 2025-07-12 15:40:06 | INFO  | Task df0c959d-dbc7-4102-bfb4-3ca3612852e0 is in state STARTED 2025-07-12 15:40:06.308753 | orchestrator | 2025-07-12 15:40:06 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED 2025-07-12 15:40:06.309138 | orchestrator | 2025-07-12 15:40:06 | INFO  | Task 46033857-3653-46c4-ae4a-077368778a99 is in state STARTED 2025-07-12 15:40:06.310004 | orchestrator | 
2025-07-12 15:40:06 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:06.310213 | orchestrator | 2025-07-12 15:40:06 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:06.310229 | orchestrator | 2025-07-12 15:40:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:40:09.337410 | orchestrator | 2025-07-12 15:40:09 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED 2025-07-12 15:40:09.337711 | orchestrator | 2025-07-12 15:40:09 | INFO  | Task df0c959d-dbc7-4102-bfb4-3ca3612852e0 is in state STARTED 2025-07-12 15:40:09.338268 | orchestrator | 2025-07-12 15:40:09 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED 2025-07-12 15:40:09.338504 | orchestrator | 2025-07-12 15:40:09 | INFO  | Task 46033857-3653-46c4-ae4a-077368778a99 is in state SUCCESS 2025-07-12 15:40:09.340705 | orchestrator | 2025-07-12 15:40:09 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:09.342268 | orchestrator | 2025-07-12 15:40:09 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:09.342293 | orchestrator | 2025-07-12 15:40:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:40:12.386997 | orchestrator | 2025-07-12 15:40:12 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED 2025-07-12 15:40:12.387578 | orchestrator | 2025-07-12 15:40:12 | INFO  | Task df0c959d-dbc7-4102-bfb4-3ca3612852e0 is in state STARTED 2025-07-12 15:40:12.388468 | orchestrator | 2025-07-12 15:40:12 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED 2025-07-12 15:40:12.389656 | orchestrator | 2025-07-12 15:40:12 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:12.389710 | orchestrator | 2025-07-12 15:40:12 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:12.389773 | orchestrator | 
2025-07-12 15:40:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:40:15.424462 | orchestrator | 2025-07-12 15:40:15 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED 2025-07-12 15:40:15.425441 | orchestrator | 2025-07-12 15:40:15 | INFO  | Task df0c959d-dbc7-4102-bfb4-3ca3612852e0 is in state SUCCESS 2025-07-12 15:40:15.427778 | orchestrator | 2025-07-12 15:40:15 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED 2025-07-12 15:40:15.429109 | orchestrator | 2025-07-12 15:40:15 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:15.429696 | orchestrator | 2025-07-12 15:40:15 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:15.430432 | orchestrator | 2025-07-12 15:40:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:40:18.463784 | orchestrator | 2025-07-12 15:40:18 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED 2025-07-12 15:40:18.465454 | orchestrator | 2025-07-12 15:40:18 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED 2025-07-12 15:40:18.466255 | orchestrator | 2025-07-12 15:40:18 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:18.468538 | orchestrator | 2025-07-12 15:40:18 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:18.468578 | orchestrator | 2025-07-12 15:40:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:40:21.518725 | orchestrator | 2025-07-12 15:40:21 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED 2025-07-12 15:40:21.518842 | orchestrator | 2025-07-12 15:40:21 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED 2025-07-12 15:40:21.518863 | orchestrator | 2025-07-12 15:40:21 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:21.519553 | orchestrator | 2025-07-12 15:40:21 | INFO  | 
Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:21.519582 | orchestrator | 2025-07-12 15:40:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:40:24.554074 | orchestrator | 2025-07-12 15:40:24 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED 2025-07-12 15:40:24.555522 | orchestrator | 2025-07-12 15:40:24 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED 2025-07-12 15:40:24.557584 | orchestrator | 2025-07-12 15:40:24 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:24.559103 | orchestrator | 2025-07-12 15:40:24 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:24.560706 | orchestrator | 2025-07-12 15:40:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:40:27.594860 | orchestrator | 2025-07-12 15:40:27 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED 2025-07-12 15:40:27.596546 | orchestrator | 2025-07-12 15:40:27 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state STARTED 2025-07-12 15:40:27.598577 | orchestrator | 2025-07-12 15:40:27 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:27.600287 | orchestrator | 2025-07-12 15:40:27 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:27.600383 | orchestrator | 2025-07-12 15:40:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:40:30.650453 | orchestrator | 2025-07-12 15:40:30 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED 2025-07-12 15:40:30.651493 | orchestrator | 2025-07-12 15:40:30 | INFO  | Task 5a607406-90b3-485e-a428-013e347b8461 is in state SUCCESS 2025-07-12 15:40:30.653960 | orchestrator | 2025-07-12 15:40:30.654006 | orchestrator | 2025-07-12 15:40:30.654069 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-07-12 15:40:30.654084 | 
orchestrator | 2025-07-12 15:40:30.654096 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-12 15:40:30.654107 | orchestrator | Saturday 12 July 2025 15:40:05 +0000 (0:00:00.146) 0:00:00.146 ********* 2025-07-12 15:40:30.654118 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-12 15:40:30.654129 | orchestrator | 2025-07-12 15:40:30.654140 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-12 15:40:30.654151 | orchestrator | Saturday 12 July 2025 15:40:06 +0000 (0:00:00.737) 0:00:00.884 ********* 2025-07-12 15:40:30.654162 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:30.654174 | orchestrator | 2025-07-12 15:40:30.654184 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-07-12 15:40:30.654195 | orchestrator | Saturday 12 July 2025 15:40:07 +0000 (0:00:01.137) 0:00:02.021 ********* 2025-07-12 15:40:30.654207 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:30.654218 | orchestrator | 2025-07-12 15:40:30.654228 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:40:30.654239 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:40:30.654252 | orchestrator | 2025-07-12 15:40:30.654263 | orchestrator | 2025-07-12 15:40:30.654273 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:40:30.654352 | orchestrator | Saturday 12 July 2025 15:40:07 +0000 (0:00:00.470) 0:00:02.491 ********* 2025-07-12 15:40:30.654366 | orchestrator | =============================================================================== 2025-07-12 15:40:30.654377 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.14s 2025-07-12 15:40:30.654388 | orchestrator | Get 
kubeconfig file ----------------------------------------------------- 0.74s 2025-07-12 15:40:30.654398 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.47s 2025-07-12 15:40:30.654409 | orchestrator | 2025-07-12 15:40:30.654419 | orchestrator | 2025-07-12 15:40:30.654430 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-07-12 15:40:30.654440 | orchestrator | 2025-07-12 15:40:30.654451 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-07-12 15:40:30.654461 | orchestrator | Saturday 12 July 2025 15:40:05 +0000 (0:00:00.167) 0:00:00.167 ********* 2025-07-12 15:40:30.654472 | orchestrator | ok: [testbed-manager] 2025-07-12 15:40:30.654483 | orchestrator | 2025-07-12 15:40:30.654494 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-07-12 15:40:30.654504 | orchestrator | Saturday 12 July 2025 15:40:06 +0000 (0:00:00.534) 0:00:00.701 ********* 2025-07-12 15:40:30.654515 | orchestrator | ok: [testbed-manager] 2025-07-12 15:40:30.654526 | orchestrator | 2025-07-12 15:40:30.654538 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-12 15:40:30.654550 | orchestrator | Saturday 12 July 2025 15:40:06 +0000 (0:00:00.662) 0:00:01.364 ********* 2025-07-12 15:40:30.654562 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-12 15:40:30.654574 | orchestrator | 2025-07-12 15:40:30.654606 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-12 15:40:30.654618 | orchestrator | Saturday 12 July 2025 15:40:07 +0000 (0:00:00.757) 0:00:02.121 ********* 2025-07-12 15:40:30.654629 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:30.654641 | orchestrator | 2025-07-12 15:40:30.654654 | orchestrator | TASK [Change server address in the kubeconfig] 
********************************* 2025-07-12 15:40:30.654666 | orchestrator | Saturday 12 July 2025 15:40:08 +0000 (0:00:01.090) 0:00:03.212 ********* 2025-07-12 15:40:30.654678 | orchestrator | changed: [testbed-manager] 2025-07-12 15:40:30.654690 | orchestrator | 2025-07-12 15:40:30.654702 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-07-12 15:40:30.654714 | orchestrator | Saturday 12 July 2025 15:40:09 +0000 (0:00:00.732) 0:00:03.944 ********* 2025-07-12 15:40:30.654726 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 15:40:30.654738 | orchestrator | 2025-07-12 15:40:30.654751 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-07-12 15:40:30.654762 | orchestrator | Saturday 12 July 2025 15:40:10 +0000 (0:00:01.487) 0:00:05.431 ********* 2025-07-12 15:40:30.654774 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 15:40:30.654786 | orchestrator | 2025-07-12 15:40:30.654797 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-07-12 15:40:30.654809 | orchestrator | Saturday 12 July 2025 15:40:11 +0000 (0:00:00.718) 0:00:06.149 ********* 2025-07-12 15:40:30.654820 | orchestrator | ok: [testbed-manager] 2025-07-12 15:40:30.654832 | orchestrator | 2025-07-12 15:40:30.654844 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-07-12 15:40:30.654856 | orchestrator | Saturday 12 July 2025 15:40:12 +0000 (0:00:00.501) 0:00:06.651 ********* 2025-07-12 15:40:30.654868 | orchestrator | ok: [testbed-manager] 2025-07-12 15:40:30.654880 | orchestrator | 2025-07-12 15:40:30.654892 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:40:30.654917 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:40:30.654928 | 
orchestrator | 2025-07-12 15:40:30.654939 | orchestrator | 2025-07-12 15:40:30.654950 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:40:30.654968 | orchestrator | Saturday 12 July 2025 15:40:12 +0000 (0:00:00.257) 0:00:06.908 ********* 2025-07-12 15:40:30.654979 | orchestrator | =============================================================================== 2025-07-12 15:40:30.654989 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.49s 2025-07-12 15:40:30.655000 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.09s 2025-07-12 15:40:30.655011 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.76s 2025-07-12 15:40:30.655035 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.73s 2025-07-12 15:40:30.655046 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.72s 2025-07-12 15:40:30.655057 | orchestrator | Create .kube directory -------------------------------------------------- 0.66s 2025-07-12 15:40:30.655068 | orchestrator | Get home directory of operator user ------------------------------------- 0.53s 2025-07-12 15:40:30.655078 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.50s 2025-07-12 15:40:30.655089 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s 2025-07-12 15:40:30.655099 | orchestrator | 2025-07-12 15:40:30.655110 | orchestrator | 2025-07-12 15:40:30.655121 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:40:30.655131 | orchestrator | 2025-07-12 15:40:30.655142 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:40:30.655152 | orchestrator | Saturday 12 July 2025 15:39:18 
+0000 (0:00:00.479) 0:00:00.479 ********* 2025-07-12 15:40:30.655163 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:40:30.655174 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:40:30.655184 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:40:30.655195 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:40:30.655205 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:40:30.655216 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:40:30.655227 | orchestrator | 2025-07-12 15:40:30.655237 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:40:30.655248 | orchestrator | Saturday 12 July 2025 15:39:19 +0000 (0:00:01.115) 0:00:01.595 ********* 2025-07-12 15:40:30.655259 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 15:40:30.655270 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 15:40:30.655280 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 15:40:30.655315 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 15:40:30.655326 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 15:40:30.655337 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 15:40:30.655348 | orchestrator | 2025-07-12 15:40:30.655358 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-07-12 15:40:30.655369 | orchestrator | 2025-07-12 15:40:30.655380 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-07-12 15:40:30.655390 | orchestrator | Saturday 12 July 2025 15:39:20 +0000 (0:00:01.121) 0:00:02.717 ********* 2025-07-12 15:40:30.655402 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:40:30.655414 | orchestrator | 2025-07-12 15:40:30.655425 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-12 15:40:30.655435 | orchestrator | Saturday 12 July 2025 15:39:22 +0000 (0:00:01.660) 0:00:04.378 ********* 2025-07-12 15:40:30.655446 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-12 15:40:30.655457 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-12 15:40:30.655468 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-12 15:40:30.655485 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-12 15:40:30.655496 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-12 15:40:30.655507 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-12 15:40:30.655517 | orchestrator | 2025-07-12 15:40:30.655528 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-12 15:40:30.655538 | orchestrator | Saturday 12 July 2025 15:39:24 +0000 (0:00:01.821) 0:00:06.199 ********* 2025-07-12 15:40:30.655549 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-12 15:40:30.655560 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-12 15:40:30.655570 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-12 15:40:30.655581 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-12 15:40:30.655591 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-12 15:40:30.655601 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-12 15:40:30.655612 | orchestrator | 2025-07-12 15:40:30.655623 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-12 15:40:30.655633 | orchestrator | Saturday 12 
July 2025 15:39:26 +0000 (0:00:02.376) 0:00:08.576 ********* 2025-07-12 15:40:30.655644 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-07-12 15:40:30.655655 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-07-12 15:40:30.655666 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:40:30.655676 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-07-12 15:40:30.655687 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:40:30.655697 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-07-12 15:40:30.655708 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:40:30.655724 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-07-12 15:40:30.655734 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:30.655745 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:40:30.655755 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-07-12 15:40:30.655766 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:40:30.655776 | orchestrator | 2025-07-12 15:40:30.655787 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-07-12 15:40:30.655798 | orchestrator | Saturday 12 July 2025 15:39:29 +0000 (0:00:02.352) 0:00:10.929 ********* 2025-07-12 15:40:30.655809 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:40:30.655819 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:40:30.655830 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:40:30.655847 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:40:30.655858 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:40:30.655869 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:40:30.655879 | orchestrator | 2025-07-12 15:40:30.655890 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-07-12 15:40:30.655900 | orchestrator | Saturday 12 July 2025 
15:39:30 +0000 (0:00:01.178) 0:00:12.107 ********* 2025-07-12 15:40:30.655914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 15:40:30.655933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 15:40:30.655951 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.655963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.655979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.655998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656027 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.656083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656095 | orchestrator |
2025-07-12 15:40:30.656106 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-07-12 15:40:30.656117 | orchestrator | Saturday 12 July 2025 15:39:32 +0000 (0:00:02.537) 0:00:14.645 *********
2025-07-12 15:40:30.656136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.656148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.656159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.656171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.656853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.656880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.656925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.656994 | orchestrator |
2025-07-12 15:40:30.657005 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-07-12 15:40:30.657016 | orchestrator | Saturday 12 July 2025 15:39:36 +0000 (0:00:03.269) 0:00:17.915 *********
2025-07-12 15:40:30.657027 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:40:30.657038 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:40:30.657049 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:40:30.657059 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:40:30.657070 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:40:30.657080 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:40:30.657091 | orchestrator |
2025-07-12 15:40:30.657102 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-07-12 15:40:30.657112 | orchestrator | Saturday 12 July 2025 15:39:36 +0000 (0:00:00.833) 0:00:18.748 *********
2025-07-12 15:40:30.657124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.657135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.657146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.657165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.657192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.657204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 15:40:30.657215 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.657226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.657243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.657266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.657278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.657349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 15:40:30.657363 | orchestrator |
2025-07-12 15:40:30.657374 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 15:40:30.657385 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:02.382) 0:00:21.131 *********
2025-07-12 15:40:30.657396 | orchestrator |
2025-07-12 15:40:30.657407 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 15:40:30.657418 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:00.124) 0:00:21.255 *********
2025-07-12 15:40:30.657428 | orchestrator |
2025-07-12 15:40:30.657439 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 15:40:30.657449 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:00.155) 0:00:21.410 *********
2025-07-12 15:40:30.657460 | orchestrator |
2025-07-12 15:40:30.657471 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 15:40:30.657482 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:00.169) 0:00:21.580 *********
2025-07-12 15:40:30.657492 | orchestrator |
2025-07-12 15:40:30.657525 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 15:40:30.657536 | orchestrator | Saturday 12 July 2025 15:39:40 +0000 (0:00:00.433) 0:00:22.014 *********
2025-07-12 15:40:30.657547 | orchestrator | 2025-07-12 15:40:30.657558 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 15:40:30.657568 | orchestrator | Saturday 12 July 2025 15:39:40 +0000 (0:00:00.461) 0:00:22.475 ********* 2025-07-12 15:40:30.657579 | orchestrator | 2025-07-12 15:40:30.657590 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-07-12 15:40:30.657609 | orchestrator | Saturday 12 July 2025 15:39:41 +0000 (0:00:01.133) 0:00:23.609 ********* 2025-07-12 15:40:30.657619 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:40:30.657628 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:40:30.657638 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:40:30.657647 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:40:30.657657 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:40:30.657666 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:40:30.657675 | orchestrator | 2025-07-12 15:40:30.657685 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-07-12 15:40:30.657694 | orchestrator | Saturday 12 July 2025 15:39:52 +0000 (0:00:10.842) 0:00:34.460 ********* 2025-07-12 15:40:30.657704 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:40:30.657714 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:40:30.657723 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:40:30.657732 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:40:30.657742 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:40:30.657751 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:40:30.657760 | orchestrator | 2025-07-12 15:40:30.657770 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-12 15:40:30.657786 | orchestrator | Saturday 12 July 2025 15:39:55 +0000 (0:00:03.318) 0:00:37.778 ********* 2025-07-12 15:40:30.657796 | orchestrator | 
changed: [testbed-node-4] 2025-07-12 15:40:30.657806 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:40:30.657815 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:40:30.657825 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:40:30.657834 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:40:30.657843 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:40:30.657853 | orchestrator | 2025-07-12 15:40:30.657862 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-07-12 15:40:30.657872 | orchestrator | Saturday 12 July 2025 15:40:06 +0000 (0:00:10.487) 0:00:48.266 ********* 2025-07-12 15:40:30.657881 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-07-12 15:40:30.657891 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-07-12 15:40:30.657906 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-07-12 15:40:30.657915 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-07-12 15:40:30.657925 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-07-12 15:40:30.657934 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-07-12 15:40:30.657943 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-07-12 15:40:30.657953 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-07-12 15:40:30.657962 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 
'value': 'testbed-node-5'}) 2025-07-12 15:40:30.657972 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-07-12 15:40:30.657981 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-07-12 15:40:30.657990 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-07-12 15:40:30.658000 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 15:40:30.658009 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 15:40:30.658072 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 15:40:30.658085 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 15:40:30.658095 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 15:40:30.658117 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 15:40:30.658137 | orchestrator | 2025-07-12 15:40:30.658147 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-07-12 15:40:30.658156 | orchestrator | Saturday 12 July 2025 15:40:15 +0000 (0:00:08.603) 0:00:56.869 ********* 2025-07-12 15:40:30.658166 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-07-12 15:40:30.658176 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:40:30.658185 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-07-12 15:40:30.658194 | orchestrator | skipping: 
[testbed-node-4] 2025-07-12 15:40:30.658204 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-07-12 15:40:30.658213 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:40:30.658223 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-07-12 15:40:30.658232 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-07-12 15:40:30.658242 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-07-12 15:40:30.658251 | orchestrator | 2025-07-12 15:40:30.658261 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-07-12 15:40:30.658270 | orchestrator | Saturday 12 July 2025 15:40:17 +0000 (0:00:02.768) 0:00:59.638 ********* 2025-07-12 15:40:30.658280 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-07-12 15:40:30.658316 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:40:30.658327 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-07-12 15:40:30.658336 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:40:30.658346 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-07-12 15:40:30.658355 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:40:30.658365 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-07-12 15:40:30.658374 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-07-12 15:40:30.658384 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-07-12 15:40:30.658393 | orchestrator | 2025-07-12 15:40:30.658402 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-12 15:40:30.658412 | orchestrator | Saturday 12 July 2025 15:40:21 +0000 (0:00:04.177) 0:01:03.816 ********* 2025-07-12 15:40:30.658422 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:40:30.658431 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:40:30.658448 | 
orchestrator | changed: [testbed-node-5] 2025-07-12 15:40:30.658458 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:40:30.658467 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:40:30.658477 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:40:30.658486 | orchestrator | 2025-07-12 15:40:30.658496 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:40:30.658506 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 15:40:30.658517 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 15:40:30.658526 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 15:40:30.658541 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 15:40:30.658557 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 15:40:30.658567 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 15:40:30.658577 | orchestrator | 2025-07-12 15:40:30.658586 | orchestrator | 2025-07-12 15:40:30.658596 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:40:30.658605 | orchestrator | Saturday 12 July 2025 15:40:29 +0000 (0:00:07.253) 0:01:11.069 ********* 2025-07-12 15:40:30.658615 | orchestrator | =============================================================================== 2025-07-12 15:40:30.658624 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.74s 2025-07-12 15:40:30.658633 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.85s 2025-07-12 15:40:30.658643 | orchestrator | openvswitch : Set system-id, 
hostname and hw-offload -------------------- 8.60s 2025-07-12 15:40:30.658652 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.18s 2025-07-12 15:40:30.658661 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 3.32s 2025-07-12 15:40:30.658671 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.27s 2025-07-12 15:40:30.658680 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.77s 2025-07-12 15:40:30.658689 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.54s 2025-07-12 15:40:30.658699 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.48s 2025-07-12 15:40:30.658708 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.38s 2025-07-12 15:40:30.658717 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.38s 2025-07-12 15:40:30.658727 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.35s 2025-07-12 15:40:30.658736 | orchestrator | module-load : Load modules ---------------------------------------------- 1.82s 2025-07-12 15:40:30.658745 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.66s 2025-07-12 15:40:30.658755 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.18s 2025-07-12 15:40:30.658764 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.12s 2025-07-12 15:40:30.658773 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.12s 2025-07-12 15:40:30.658782 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.83s 2025-07-12 15:40:30.658792 | orchestrator | 2025-07-12 15:40:30 | INFO  | Task 
10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:30.658802 | orchestrator | 2025-07-12 15:40:30 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:30.660165 | orchestrator | 2025-07-12 15:40:30 | INFO  | Task 00936e92-b768-4ac9-8985-4f52461d8bcd is in state STARTED 2025-07-12 15:40:30.660260 | orchestrator | 2025-07-12 15:40:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:40:33.702900 | orchestrator | 2025-07-12 15:40:33 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state STARTED 2025-07-12 15:40:33.703015 | orchestrator | 2025-07-12 15:40:33 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:40:33.703178 | orchestrator | 2025-07-12 15:40:33 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:40:33.703772 | orchestrator | 2025-07-12 15:40:33 | INFO  | Task 00936e92-b768-4ac9-8985-4f52461d8bcd is in state STARTED 2025-07-12 15:40:33.703806 | orchestrator | 2025-07-12 15:40:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:42:02.168408 | orchestrator | 2025-07-12 15:42:02.168500 | orchestrator | 2025-07-12 15:42:02.168515 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-07-12 15:42:02.168528 |
orchestrator | 2025-07-12 15:42:02.168539 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-12 15:42:02.168573 | orchestrator | Saturday 12 July 2025 15:39:36 +0000 (0:00:00.185) 0:00:00.185 ********* 2025-07-12 15:42:02.168618 | orchestrator | ok: [localhost] => { 2025-07-12 15:42:02.168632 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-07-12 15:42:02.168643 | orchestrator | } 2025-07-12 15:42:02.168654 | orchestrator | 2025-07-12 15:42:02.168664 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-07-12 15:42:02.168675 | orchestrator | Saturday 12 July 2025 15:39:36 +0000 (0:00:00.101) 0:00:00.286 ********* 2025-07-12 15:42:02.168687 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-07-12 15:42:02.168698 | orchestrator | ...ignoring 2025-07-12 15:42:02.168709 | orchestrator | 2025-07-12 15:42:02.168720 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-07-12 15:42:02.168730 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:02.921) 0:00:03.208 ********* 2025-07-12 15:42:02.168741 | orchestrator | skipping: [localhost] 2025-07-12 15:42:02.168751 | orchestrator | 2025-07-12 15:42:02.168762 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-07-12 15:42:02.168773 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:00.047) 0:00:03.256 ********* 2025-07-12 15:42:02.168783 | orchestrator | ok: [localhost] 2025-07-12 15:42:02.168794 | orchestrator | 2025-07-12 15:42:02.168816 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:42:02.168828 | orchestrator | 2025-07-12 
15:42:02.168839 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:42:02.168850 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:00.133) 0:00:03.389 ********* 2025-07-12 15:42:02.168861 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:42:02.168872 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:42:02.168882 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:42:02.168893 | orchestrator | 2025-07-12 15:42:02.168904 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:42:02.168914 | orchestrator | Saturday 12 July 2025 15:39:40 +0000 (0:00:00.680) 0:00:04.069 ********* 2025-07-12 15:42:02.168925 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-07-12 15:42:02.168937 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-07-12 15:42:02.168948 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-07-12 15:42:02.168959 | orchestrator | 2025-07-12 15:42:02.168970 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-07-12 15:42:02.168980 | orchestrator | 2025-07-12 15:42:02.168991 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-12 15:42:02.169002 | orchestrator | Saturday 12 July 2025 15:39:42 +0000 (0:00:02.401) 0:00:06.471 ********* 2025-07-12 15:42:02.169013 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:42:02.169023 | orchestrator | 2025-07-12 15:42:02.169034 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-07-12 15:42:02.169044 | orchestrator | Saturday 12 July 2025 15:39:43 +0000 (0:00:01.257) 0:00:07.728 ********* 2025-07-12 15:42:02.169055 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:42:02.169066 | orchestrator | 
2025-07-12 15:42:02.169076 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-07-12 15:42:02.169087 | orchestrator | Saturday 12 July 2025 15:39:44 +0000 (0:00:00.948) 0:00:08.677 ********* 2025-07-12 15:42:02.169097 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:42:02.169108 | orchestrator | 2025-07-12 15:42:02.169119 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-07-12 15:42:02.169129 | orchestrator | Saturday 12 July 2025 15:39:45 +0000 (0:00:00.360) 0:00:09.037 ********* 2025-07-12 15:42:02.169140 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:42:02.169158 | orchestrator | 2025-07-12 15:42:02.169169 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-07-12 15:42:02.169180 | orchestrator | Saturday 12 July 2025 15:39:45 +0000 (0:00:00.508) 0:00:09.546 ********* 2025-07-12 15:42:02.169190 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:42:02.169201 | orchestrator | 2025-07-12 15:42:02.169211 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-07-12 15:42:02.169222 | orchestrator | Saturday 12 July 2025 15:39:46 +0000 (0:00:00.594) 0:00:10.140 ********* 2025-07-12 15:42:02.169232 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:42:02.169243 | orchestrator | 2025-07-12 15:42:02.169271 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-12 15:42:02.169282 | orchestrator | Saturday 12 July 2025 15:39:46 +0000 (0:00:00.652) 0:00:10.793 ********* 2025-07-12 15:42:02.169292 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:42:02.169304 | orchestrator | 2025-07-12 15:42:02.169315 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 
2025-07-12 15:42:02.169326 | orchestrator | Saturday 12 July 2025 15:39:47 +0000 (0:00:00.953) 0:00:11.747 ********* 2025-07-12 15:42:02.169336 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:42:02.169347 | orchestrator | 2025-07-12 15:42:02.169358 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-07-12 15:42:02.169368 | orchestrator | Saturday 12 July 2025 15:39:48 +0000 (0:00:00.917) 0:00:12.664 ********* 2025-07-12 15:42:02.169379 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:42:02.169389 | orchestrator | 2025-07-12 15:42:02.169400 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-07-12 15:42:02.169411 | orchestrator | Saturday 12 July 2025 15:39:49 +0000 (0:00:00.373) 0:00:13.038 ********* 2025-07-12 15:42:02.169421 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:42:02.169432 | orchestrator | 2025-07-12 15:42:02.169458 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-07-12 15:42:02.169470 | orchestrator | Saturday 12 July 2025 15:39:49 +0000 (0:00:00.359) 0:00:13.397 ********* 2025-07-12 15:42:02.169490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:42:02.169507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:42:02.169528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:42:02.169540 | orchestrator | 2025-07-12 15:42:02.169551 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-07-12 15:42:02.169562 | orchestrator | Saturday 12 July 2025 15:39:51 +0000 (0:00:01.685) 0:00:15.083 ********* 2025-07-12 15:42:02.169583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:42:02.169596 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:42:02.169609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': 
'30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:42:02.169627 | orchestrator | 2025-07-12 15:42:02.169638 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-07-12 15:42:02.169649 | orchestrator | Saturday 12 July 2025 15:39:53 +0000 (0:00:02.635) 0:00:17.718 ********* 2025-07-12 15:42:02.169659 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 15:42:02.169670 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 15:42:02.169681 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 15:42:02.169692 | orchestrator | 2025-07-12 15:42:02.169702 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-07-12 15:42:02.169713 | orchestrator | Saturday 12 July 2025 15:39:56 +0000 (0:00:03.072) 0:00:20.790 ********* 2025-07-12 15:42:02.169723 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 15:42:02.169733 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 15:42:02.169744 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 15:42:02.169754 | orchestrator | 2025-07-12 15:42:02.169765 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-07-12 15:42:02.169776 | orchestrator | Saturday 12 July 2025 15:40:01 +0000 (0:00:04.735) 0:00:25.526 ********* 2025-07-12 15:42:02.169786 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 15:42:02.169797 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 15:42:02.169807 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 15:42:02.169818 | orchestrator | 2025-07-12 15:42:02.169835 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-07-12 15:42:02.169846 | orchestrator | Saturday 12 July 2025 15:40:03 +0000 (0:00:01.757) 0:00:27.283 ********* 2025-07-12 15:42:02.169857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 15:42:02.169932 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 15:42:02.169952 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 15:42:02.169963 | orchestrator | 2025-07-12 15:42:02.169974 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-07-12 15:42:02.169984 | orchestrator | Saturday 12 July 2025 15:40:06 +0000 (0:00:02.874) 0:00:30.158 ********* 2025-07-12 15:42:02.169995 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 15:42:02.170006 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 15:42:02.170146 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 15:42:02.170190 | orchestrator | 2025-07-12 15:42:02.170213 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-07-12 15:42:02.170224 | orchestrator | Saturday 12 July 2025 15:40:08 +0000 (0:00:01.872) 0:00:32.030 ********* 2025-07-12 15:42:02.170235 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 
15:42:02.170283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 15:42:02.170295 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 15:42:02.170306 | orchestrator | 2025-07-12 15:42:02.170317 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-12 15:42:02.170328 | orchestrator | Saturday 12 July 2025 15:40:09 +0000 (0:00:01.405) 0:00:33.436 ********* 2025-07-12 15:42:02.170338 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:42:02.170349 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:42:02.170360 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:42:02.170371 | orchestrator | 2025-07-12 15:42:02.170381 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-07-12 15:42:02.170392 | orchestrator | Saturday 12 July 2025 15:40:10 +0000 (0:00:00.621) 0:00:34.058 ********* 2025-07-12 15:42:02.170405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:42:02.170419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:42:02.170444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:42:02.170464 | orchestrator | 2025-07-12 15:42:02.170476 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-07-12 15:42:02.170488 | orchestrator | Saturday 12 July 2025 15:40:11 +0000 (0:00:01.698) 0:00:35.757 ********* 2025-07-12 15:42:02.170499 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:42:02.170509 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:42:02.170520 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:42:02.170531 | orchestrator | 2025-07-12 15:42:02.170546 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-07-12 15:42:02.170557 | orchestrator | Saturday 12 July 2025 15:40:12 +0000 (0:00:00.954) 0:00:36.711 ********* 2025-07-12 15:42:02.170568 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:42:02.170578 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:42:02.170589 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:42:02.170600 | orchestrator | 2025-07-12 15:42:02.170610 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-07-12 15:42:02.170621 | orchestrator | Saturday 12 July 2025 15:40:20 +0000 (0:00:07.733) 0:00:44.444 ********* 2025-07-12 15:42:02.170632 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:42:02.170643 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:42:02.170653 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:42:02.170664 | orchestrator | 2025-07-12 15:42:02.170675 | orchestrator | PLAY [Restart 
rabbitmq services] *********************************************** 2025-07-12 15:42:02.170685 | orchestrator | 2025-07-12 15:42:02.170696 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 15:42:02.170707 | orchestrator | Saturday 12 July 2025 15:40:21 +0000 (0:00:00.511) 0:00:44.956 ********* 2025-07-12 15:42:02.170717 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:42:02.170729 | orchestrator | 2025-07-12 15:42:02.170740 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 15:42:02.170752 | orchestrator | Saturday 12 July 2025 15:40:21 +0000 (0:00:00.696) 0:00:45.652 ********* 2025-07-12 15:42:02.170763 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:42:02.170775 | orchestrator | 2025-07-12 15:42:02.170787 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 15:42:02.170798 | orchestrator | Saturday 12 July 2025 15:40:21 +0000 (0:00:00.251) 0:00:45.903 ********* 2025-07-12 15:42:02.170810 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:42:02.170821 | orchestrator | 2025-07-12 15:42:02.170833 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 15:42:02.170844 | orchestrator | Saturday 12 July 2025 15:40:23 +0000 (0:00:01.891) 0:00:47.795 ********* 2025-07-12 15:42:02.170855 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:42:02.170867 | orchestrator | 2025-07-12 15:42:02.170878 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 15:42:02.170890 | orchestrator | 2025-07-12 15:42:02.170902 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 15:42:02.170913 | orchestrator | Saturday 12 July 2025 15:41:21 +0000 (0:00:57.292) 0:01:45.088 ********* 2025-07-12 15:42:02.170925 | orchestrator | ok: 
[testbed-node-1] 2025-07-12 15:42:02.170936 | orchestrator | 2025-07-12 15:42:02.170948 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 15:42:02.170959 | orchestrator | Saturday 12 July 2025 15:41:21 +0000 (0:00:00.667) 0:01:45.756 ********* 2025-07-12 15:42:02.170971 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:42:02.170982 | orchestrator | 2025-07-12 15:42:02.170994 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 15:42:02.171012 | orchestrator | Saturday 12 July 2025 15:41:22 +0000 (0:00:00.469) 0:01:46.225 ********* 2025-07-12 15:42:02.171023 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:42:02.171035 | orchestrator | 2025-07-12 15:42:02.171046 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 15:42:02.171058 | orchestrator | Saturday 12 July 2025 15:41:29 +0000 (0:00:06.892) 0:01:53.118 ********* 2025-07-12 15:42:02.171070 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:42:02.171081 | orchestrator | 2025-07-12 15:42:02.171092 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 15:42:02.171104 | orchestrator | 2025-07-12 15:42:02.171116 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 15:42:02.171127 | orchestrator | Saturday 12 July 2025 15:41:40 +0000 (0:00:11.070) 0:02:04.188 ********* 2025-07-12 15:42:02.171139 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:42:02.171150 | orchestrator | 2025-07-12 15:42:02.171162 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 15:42:02.171173 | orchestrator | Saturday 12 July 2025 15:41:40 +0000 (0:00:00.586) 0:02:04.775 ********* 2025-07-12 15:42:02.171185 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:42:02.171196 | 
orchestrator | 2025-07-12 15:42:02.171208 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 15:42:02.171227 | orchestrator | Saturday 12 July 2025 15:41:41 +0000 (0:00:00.220) 0:02:04.995 ********* 2025-07-12 15:42:02.171239 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:42:02.171301 | orchestrator | 2025-07-12 15:42:02.171315 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 15:42:02.171327 | orchestrator | Saturday 12 July 2025 15:41:47 +0000 (0:00:06.729) 0:02:11.725 ********* 2025-07-12 15:42:02.171338 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:42:02.171349 | orchestrator | 2025-07-12 15:42:02.171361 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-07-12 15:42:02.171372 | orchestrator | 2025-07-12 15:42:02.171383 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-07-12 15:42:02.171395 | orchestrator | Saturday 12 July 2025 15:41:57 +0000 (0:00:09.508) 0:02:21.233 ********* 2025-07-12 15:42:02.171406 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:42:02.171417 | orchestrator | 2025-07-12 15:42:02.171429 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-07-12 15:42:02.171440 | orchestrator | Saturday 12 July 2025 15:41:58 +0000 (0:00:00.731) 0:02:21.965 ********* 2025-07-12 15:42:02.171451 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 15:42:02.171463 | orchestrator | enable_outward_rabbitmq_True 2025-07-12 15:42:02.171474 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 15:42:02.171485 | orchestrator | outward_rabbitmq_restart 2025-07-12 15:42:02.171497 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:42:02.171508 | orchestrator | 
ok: [testbed-node-2] 2025-07-12 15:42:02.171519 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:42:02.171531 | orchestrator | 2025-07-12 15:42:02.171542 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-07-12 15:42:02.171553 | orchestrator | skipping: no hosts matched 2025-07-12 15:42:02.171565 | orchestrator | 2025-07-12 15:42:02.171581 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-07-12 15:42:02.171592 | orchestrator | skipping: no hosts matched 2025-07-12 15:42:02.171603 | orchestrator | 2025-07-12 15:42:02.171615 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-07-12 15:42:02.171626 | orchestrator | skipping: no hosts matched 2025-07-12 15:42:02.171638 | orchestrator | 2025-07-12 15:42:02.171649 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:42:02.171661 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-12 15:42:02.171679 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 15:42:02.171689 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:42:02.171700 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 15:42:02.171710 | orchestrator | 2025-07-12 15:42:02.171720 | orchestrator | 2025-07-12 15:42:02.171730 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:42:02.171740 | orchestrator | Saturday 12 July 2025 15:42:00 +0000 (0:00:02.704) 0:02:24.670 ********* 2025-07-12 15:42:02.171750 | orchestrator | =============================================================================== 2025-07-12 15:42:02.171760 | orchestrator 
| rabbitmq : Waiting for rabbitmq to start ------------------------------- 77.87s 2025-07-12 15:42:02.171771 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.51s 2025-07-12 15:42:02.171781 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.73s 2025-07-12 15:42:02.171791 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.74s 2025-07-12 15:42:02.171801 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.07s 2025-07-12 15:42:02.171811 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.92s 2025-07-12 15:42:02.171821 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.88s 2025-07-12 15:42:02.171831 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.70s 2025-07-12 15:42:02.171841 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.64s 2025-07-12 15:42:02.171851 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.40s 2025-07-12 15:42:02.171861 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.95s 2025-07-12 15:42:02.171871 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.87s 2025-07-12 15:42:02.171881 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.76s 2025-07-12 15:42:02.171891 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.70s 2025-07-12 15:42:02.171901 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.69s 2025-07-12 15:42:02.171911 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.41s 2025-07-12 15:42:02.171921 | orchestrator | rabbitmq : 
include_tasks ------------------------------------------------ 1.26s 2025-07-12 15:42:02.171932 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.95s 2025-07-12 15:42:02.171942 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.95s 2025-07-12 15:42:02.171952 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.95s 2025-07-12 15:42:02.171968 | orchestrator | 2025-07-12 15:42:02 | INFO  | Task f6e3d5fb-2f04-4dff-badc-36ec1d6d7d79 is in state SUCCESS 2025-07-12 15:42:02.171979 | orchestrator | 2025-07-12 15:42:02 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:42:02.171989 | orchestrator | 2025-07-12 15:42:02 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:42:02.172000 | orchestrator | 2025-07-12 15:42:02 | INFO  | Task 00936e92-b768-4ac9-8985-4f52461d8bcd is in state STARTED 2025-07-12 15:42:02.172010 | orchestrator | 2025-07-12 15:42:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:43:03.071402 | orchestrator | 2025-07-12 15:43:03 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:43:03.077172 | orchestrator | 2025-07-12 15:43:03.077263 | orchestrator | 2025-07-12 15:43:03.077278 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:43:03.077291 | orchestrator | 2025-07-12 15:43:03.077319 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:43:03.077331 | orchestrator | Saturday 12 July 2025 15:40:34 +0000 (0:00:00.176) 0:00:00.176 ********* 2025-07-12 15:43:03.077343 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:43:03.077355 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:43:03.077365 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:43:03.077376 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.077387 | orchestrator | ok: 
[testbed-node-1] 2025-07-12 15:43:03.077398 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.077409 | orchestrator | 2025-07-12 15:43:03.077420 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:43:03.077431 | orchestrator | Saturday 12 July 2025 15:40:35 +0000 (0:00:00.930) 0:00:01.107 ********* 2025-07-12 15:43:03.077441 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-07-12 15:43:03.077453 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-07-12 15:43:03.077464 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-07-12 15:43:03.077475 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-07-12 15:43:03.077486 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-07-12 15:43:03.077519 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-07-12 15:43:03.077530 | orchestrator | 2025-07-12 15:43:03.077542 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-07-12 15:43:03.077552 | orchestrator | 2025-07-12 15:43:03.077563 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-07-12 15:43:03.077574 | orchestrator | Saturday 12 July 2025 15:40:36 +0000 (0:00:01.092) 0:00:02.200 ********* 2025-07-12 15:43:03.077586 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:43:03.077598 | orchestrator | 2025-07-12 15:43:03.077609 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-07-12 15:43:03.077619 | orchestrator | Saturday 12 July 2025 15:40:38 +0000 (0:00:01.308) 0:00:03.508 ********* 2025-07-12 15:43:03.077633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077719 | orchestrator | 2025-07-12 15:43:03.077734 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-07-12 15:43:03.077746 | orchestrator | Saturday 12 July 2025 15:40:39 +0000 (0:00:01.231) 0:00:04.740 ********* 2025-07-12 15:43:03.077757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-07-12 15:43:03.077791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077836 | orchestrator | 2025-07-12 15:43:03.077847 | orchestrator | 
TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-07-12 15:43:03.077857 | orchestrator | Saturday 12 July 2025 15:40:40 +0000 (0:00:01.594) 0:00:06.334 ********* 2025-07-12 15:43:03.077869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077904 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.077957 | orchestrator | 2025-07-12 15:43:03.077968 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-07-12 15:43:03.077979 | orchestrator | Saturday 12 July 2025 15:40:42 +0000 (0:00:01.978) 0:00:08.312 ********* 2025-07-12 15:43:03.077990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.078001 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.078013 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.078078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.078090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.078122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.078134 | orchestrator | 2025-07-12 15:43:03.078145 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-07-12 15:43:03.078156 | orchestrator | Saturday 12 July 2025 15:40:44 +0000 (0:00:01.510) 0:00:09.822 ********* 2025-07-12 15:43:03.078167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.078179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.078190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
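Each `changed:` line above echoes the same kolla-ansible-style service definition for every node. As a rough, hypothetical illustration (not OSISM's actual code), a dict of this shape can be filtered so that only services with `'enabled': True` yield a container to deploy; `enabled_containers` is an invented helper name:

```python
# Sketch: filter a kolla-ansible-style service mapping down to the
# containers that should actually be deployed on a host. The dict shape
# mirrors the (item={...}) payloads printed in the log above.
def enabled_containers(services):
    """Yield (container_name, image, volumes) for each enabled service."""
    for name, spec in services.items():
        if spec.get("enabled"):
            yield spec["container_name"], spec["image"], spec["volumes"]

services = {
    "ovn-controller": {
        "container_name": "ovn_controller",
        "group": "ovn-controller",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711",
        "volumes": [
            "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
            "/run/openvswitch:/run/openvswitch:shared",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}
```

A disabled service (enabled=False) would simply be skipped, which is why the same item dict is iterated once per task and node in the log.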
2025-07-12 15:43:03.078201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:43:03.078212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:43:03.078250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:43:03.078262 | orchestrator |
2025-07-12 15:43:03.078273 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-07-12 15:43:03.078284 | orchestrator | Saturday 12 July 2025 15:40:45 +0000 (0:00:01.396) 0:00:11.219 *********
2025-07-12 15:43:03.078302 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:43:03.078315 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:43:03.078325 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:43:03.078336 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:43:03.078346 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:43:03.078357 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:43:03.078367 | orchestrator |
2025-07-12 15:43:03.078378 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-07-12 15:43:03.078389 | orchestrator | Saturday 12 July 2025 15:40:48 +0000 (0:00:02.426) 0:00:13.645 *********
2025-07-12 15:43:03.078400 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-07-12 15:43:03.078410 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-07-12 15:43:03.078421 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-07-12 15:43:03.078438 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-07-12 15:43:03.078456 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-07-12 15:43:03.078467 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-07-12 15:43:03.078478 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 15:43:03.078509 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 15:43:03.078520 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 15:43:03.078531 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 15:43:03.078542 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 15:43:03.078553 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 15:43:03.078564 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 15:43:03.078576 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 15:43:03.078587 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 15:43:03.078598 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 15:43:03.078609 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 15:43:03.078620 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 15:43:03.078630 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 15:43:03.078643 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 15:43:03.078654 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 15:43:03.078664 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 15:43:03.078675 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 15:43:03.078686 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 15:43:03.078696 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 15:43:03.078714 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 15:43:03.078725 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 15:43:03.078735 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 15:43:03.078746 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 15:43:03.078757 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 15:43:03.078768 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 15:43:03.078778 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 15:43:03.078789 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 15:43:03.078800 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 15:43:03.078811 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 15:43:03.078822 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 15:43:03.078832 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 15:43:03.078843 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 15:43:03.078854 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 15:43:03.078865 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 15:43:03.078884 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 15:43:03.078902 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 15:43:03.078927 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-07-12 15:43:03.078948 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-07-12 15:43:03.078961 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-07-12 15:43:03.078972 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-07-12 15:43:03.078982 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-07-12 15:43:03.078993 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-07-12 15:43:03.079003 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 15:43:03.079014 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 15:43:03.079024 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 15:43:03.079035 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 15:43:03.079046 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 15:43:03.079065 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 15:43:03.079076 | orchestrator |
2025-07-12 15:43:03.079086 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 15:43:03.079097 | orchestrator | Saturday 12 July 2025 15:41:07 +0000 (0:00:18.916) 0:00:32.561 *********
2025-07-12 15:43:03.079108 | orchestrator |
2025-07-12 15:43:03.079119 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 15:43:03.079129 | orchestrator | Saturday 12 July 2025 15:41:07 +0000 (0:00:00.061) 0:00:32.623 *********
2025-07-12 15:43:03.079140 | orchestrator |
2025-07-12 15:43:03.079151 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 15:43:03.079161 | orchestrator | Saturday 12 July 2025 15:41:07 +0000 (0:00:00.070) 0:00:32.694 *********
2025-07-12 15:43:03.079172 | orchestrator |
2025-07-12 15:43:03.079182 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 15:43:03.079193 | orchestrator | Saturday 12 July 2025 15:41:07 +0000 (0:00:00.065) 0:00:32.759 *********
2025-07-12 15:43:03.079203 | orchestrator |
2025-07-12 15:43:03.079213 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 15:43:03.079263 | orchestrator | Saturday 12 July 2025 15:41:07 +0000 (0:00:00.063) 0:00:32.822 *********
2025-07-12 15:43:03.079274 | orchestrator |
2025-07-12 15:43:03.079285 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 15:43:03.079295 | orchestrator | Saturday 12 July 2025 15:41:07 +0000 (0:00:00.062) 0:00:32.884 *********
2025-07-12 15:43:03.079306 | orchestrator |
2025-07-12 15:43:03.079316 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-07-12 15:43:03.079327 | orchestrator | Saturday 12 July 2025 15:41:07 +0000 (0:00:00.064) 0:00:32.949 *********
2025-07-12 15:43:03.079337 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:43:03.079349 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:43:03.079359 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:43:03.079370 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:43:03.079380 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:43:03.079391 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:43:03.079401 | orchestrator |
2025-07-12 15:43:03.079412 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-07-12 15:43:03.079422 | orchestrator | Saturday 12 July 2025 15:41:09 +0000 (0:00:01.740) 0:00:34.689 *********
2025-07-12 15:43:03.079433 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:43:03.079444 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:43:03.079454 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:43:03.079465 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:43:03.079476 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:43:03.079486 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:43:03.079496 | orchestrator |
2025-07-12 15:43:03.079507 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-07-12 15:43:03.079518 | orchestrator |
2025-07-12 15:43:03.079529 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-12 15:43:03.079539 | orchestrator | Saturday 12 July 2025 15:41:44 +0000 (0:00:35.005) 0:01:09.694 *********
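The `Configure OVN in OVSDB` task above writes each `{'name': ..., 'value': ...}` item into the `external_ids` column of the local `Open_vSwitch` record, adding or removing keys depending on `state` (which is why the gateway nodes get `changed` for `ovn-cms-options` while the compute nodes get `ok` with `state: absent`). A hedged sketch of the resulting `ovs-vsctl` invocations; command strings only, nothing is executed, and the exact flags kolla-ansible passes are an assumption:

```python
# Sketch: turn the item list from the "Configure OVN in OVSDB" task into
# ovs-vsctl command strings. state=present -> set the external_ids key,
# state=absent -> remove it; items without 'state' default to present.
def ovsdb_commands(settings):
    cmds = []
    for item in settings:
        if item.get("state", "present") == "present":
            cmds.append(
                "ovs-vsctl set Open_vSwitch . external_ids:{}={}".format(
                    item["name"], item["value"]
                )
            )
        else:
            cmds.append(
                "ovs-vsctl remove Open_vSwitch . external_ids {}".format(item["name"])
            )
    return cmds
```

For testbed-node-0, for example, the items would yield a `set ... external_ids:ovn-encap-ip=192.168.16.10` command alongside the shared `ovn-encap-type=geneve` and `ovn-remote` settings.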
2025-07-12 15:43:03.079550 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:43:03.079561 | orchestrator |
2025-07-12 15:43:03.079571 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-12 15:43:03.079582 | orchestrator | Saturday 12 July 2025 15:41:44 +0000 (0:00:00.511) 0:01:10.206 *********
2025-07-12 15:43:03.079593 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:43:03.079603 | orchestrator |
2025-07-12 15:43:03.079621 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-07-12 15:43:03.079638 | orchestrator | Saturday 12 July 2025 15:41:45 +0000 (0:00:00.693) 0:01:10.899 *********
2025-07-12 15:43:03.079662 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:43:03.079673 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:43:03.079684 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:43:03.079694 | orchestrator |
2025-07-12 15:43:03.079705 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-07-12 15:43:03.079716 | orchestrator | Saturday 12 July 2025 15:41:46 +0000 (0:00:00.737) 0:01:11.637 *********
2025-07-12 15:43:03.079727 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:43:03.079737 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:43:03.079748 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:43:03.079758 | orchestrator |
2025-07-12 15:43:03.079769 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-07-12 15:43:03.079780 | orchestrator | Saturday 12 July 2025 15:41:46 +0000 (0:00:00.349) 0:01:11.986 *********
2025-07-12 15:43:03.079790 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:43:03.079801 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:43:03.079811 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:43:03.079822 | orchestrator |
2025-07-12 15:43:03.079833 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-07-12 15:43:03.079843 | orchestrator | Saturday 12 July 2025 15:41:46 +0000 (0:00:00.354) 0:01:12.341 *********
2025-07-12 15:43:03.079854 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:43:03.079864 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:43:03.079875 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:43:03.079885 | orchestrator |
2025-07-12 15:43:03.079896 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-07-12 15:43:03.079906 | orchestrator | Saturday 12 July 2025 15:41:47 +0000 (0:00:00.587) 0:01:12.928 *********
2025-07-12 15:43:03.079917 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:43:03.079927 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:43:03.079938 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:43:03.079948 | orchestrator |
2025-07-12 15:43:03.079959 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-07-12 15:43:03.079969 | orchestrator | Saturday 12 July 2025 15:41:47 +0000 (0:00:00.441) 0:01:13.369 *********
2025-07-12 15:43:03.079980 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:43:03.079991 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:43:03.080001 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:43:03.080012 | orchestrator |
2025-07-12 15:43:03.080022 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-07-12 15:43:03.080033 | orchestrator | Saturday 12 July 2025 15:41:48 +0000 (0:00:00.316) 0:01:13.686 *********
2025-07-12 15:43:03.080044 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:43:03.080055 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:43:03.080065 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:43:03.080076 | orchestrator | 2025-07-12 15:43:03.080087 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-07-12 15:43:03.080097 | orchestrator | Saturday 12 July 2025 15:41:48 +0000 (0:00:00.353) 0:01:14.039 ********* 2025-07-12 15:43:03.080108 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.080119 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.080129 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.080139 | orchestrator | 2025-07-12 15:43:03.080150 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-07-12 15:43:03.080161 | orchestrator | Saturday 12 July 2025 15:41:49 +0000 (0:00:00.540) 0:01:14.580 ********* 2025-07-12 15:43:03.080171 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.080182 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.080193 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.080203 | orchestrator | 2025-07-12 15:43:03.080213 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-07-12 15:43:03.080373 | orchestrator | Saturday 12 July 2025 15:41:49 +0000 (0:00:00.294) 0:01:14.875 ********* 2025-07-12 15:43:03.080390 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.080411 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.080420 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.080430 | orchestrator | 2025-07-12 15:43:03.080439 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-07-12 15:43:03.080449 | orchestrator | Saturday 12 July 2025 15:41:49 +0000 (0:00:00.272) 0:01:15.147 ********* 2025-07-12 15:43:03.080458 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.080468 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.080477 | orchestrator | skipping: [testbed-node-2] 
2025-07-12 15:43:03.080487 | orchestrator | 2025-07-12 15:43:03.080496 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-07-12 15:43:03.080505 | orchestrator | Saturday 12 July 2025 15:41:50 +0000 (0:00:00.280) 0:01:15.428 ********* 2025-07-12 15:43:03.080515 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.080524 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.080533 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.080543 | orchestrator | 2025-07-12 15:43:03.080552 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-07-12 15:43:03.080561 | orchestrator | Saturday 12 July 2025 15:41:50 +0000 (0:00:00.621) 0:01:16.049 ********* 2025-07-12 15:43:03.080571 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.080580 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.080589 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.080599 | orchestrator | 2025-07-12 15:43:03.080608 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-07-12 15:43:03.080618 | orchestrator | Saturday 12 July 2025 15:41:51 +0000 (0:00:00.361) 0:01:16.411 ********* 2025-07-12 15:43:03.080627 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.080636 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.080646 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.080655 | orchestrator | 2025-07-12 15:43:03.080664 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-07-12 15:43:03.080674 | orchestrator | Saturday 12 July 2025 15:41:51 +0000 (0:00:00.439) 0:01:16.850 ********* 2025-07-12 15:43:03.080683 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.080693 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.080702 | orchestrator | skipping: [testbed-node-2] 
2025-07-12 15:43:03.080711 | orchestrator | 2025-07-12 15:43:03.080732 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-07-12 15:43:03.080743 | orchestrator | Saturday 12 July 2025 15:41:51 +0000 (0:00:00.344) 0:01:17.194 ********* 2025-07-12 15:43:03.080758 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.080768 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.080777 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.080786 | orchestrator | 2025-07-12 15:43:03.080796 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-07-12 15:43:03.080805 | orchestrator | Saturday 12 July 2025 15:41:52 +0000 (0:00:00.567) 0:01:17.762 ********* 2025-07-12 15:43:03.080815 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.080824 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.080833 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.080843 | orchestrator | 2025-07-12 15:43:03.080852 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-12 15:43:03.080861 | orchestrator | Saturday 12 July 2025 15:41:52 +0000 (0:00:00.338) 0:01:18.101 ********* 2025-07-12 15:43:03.080871 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:43:03.080880 | orchestrator | 2025-07-12 15:43:03.080944 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-07-12 15:43:03.080954 | orchestrator | Saturday 12 July 2025 15:41:53 +0000 (0:00:00.563) 0:01:18.664 ********* 2025-07-12 15:43:03.080963 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.080973 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.080983 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.080999 | orchestrator | 2025-07-12 15:43:03.081009 | 
orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-07-12 15:43:03.081018 | orchestrator | Saturday 12 July 2025 15:41:54 +0000 (0:00:00.862) 0:01:19.527 ********* 2025-07-12 15:43:03.081028 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.081037 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.081047 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.081056 | orchestrator | 2025-07-12 15:43:03.081066 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-07-12 15:43:03.081075 | orchestrator | Saturday 12 July 2025 15:41:54 +0000 (0:00:00.463) 0:01:19.990 ********* 2025-07-12 15:43:03.081085 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.081095 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.081104 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.081114 | orchestrator | 2025-07-12 15:43:03.081123 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-07-12 15:43:03.081133 | orchestrator | Saturday 12 July 2025 15:41:54 +0000 (0:00:00.372) 0:01:20.362 ********* 2025-07-12 15:43:03.081143 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.081152 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.081161 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.081171 | orchestrator | 2025-07-12 15:43:03.081180 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-07-12 15:43:03.081190 | orchestrator | Saturday 12 July 2025 15:41:55 +0000 (0:00:00.358) 0:01:20.721 ********* 2025-07-12 15:43:03.081199 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.081209 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.081244 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.081256 | orchestrator | 2025-07-12 15:43:03.081265 | orchestrator 
| TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-07-12 15:43:03.081275 | orchestrator | Saturday 12 July 2025 15:41:55 +0000 (0:00:00.588) 0:01:21.310 ********* 2025-07-12 15:43:03.081284 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.081294 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.081303 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.081312 | orchestrator | 2025-07-12 15:43:03.081321 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-07-12 15:43:03.081331 | orchestrator | Saturday 12 July 2025 15:41:56 +0000 (0:00:00.340) 0:01:21.650 ********* 2025-07-12 15:43:03.081340 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.081350 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.081359 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.081368 | orchestrator | 2025-07-12 15:43:03.081378 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-07-12 15:43:03.081387 | orchestrator | Saturday 12 July 2025 15:41:56 +0000 (0:00:00.350) 0:01:22.000 ********* 2025-07-12 15:43:03.081397 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.081406 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.081415 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.081425 | orchestrator | 2025-07-12 15:43:03.081434 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-07-12 15:43:03.081443 | orchestrator | Saturday 12 July 2025 15:41:56 +0000 (0:00:00.331) 0:01:22.332 ********* 2025-07-12 15:43:03.081454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081568 | orchestrator | 2025-07-12 15:43:03.081578 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-07-12 15:43:03.081588 | orchestrator | Saturday 12 July 2025 15:41:58 +0000 (0:00:01.503) 0:01:23.836 
********* 2025-07-12 15:43:03.081598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-07-12 15:43:03.081770 | orchestrator | 2025-07-12 15:43:03.081780 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-12 15:43:03.081789 | orchestrator | Saturday 12 July 2025 15:42:02 +0000 (0:00:04.356) 0:01:28.192 ********* 2025-07-12 15:43:03.081799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-07-12 15:43:03.081865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.081914 | orchestrator | 2025-07-12 15:43:03.081924 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 15:43:03.081933 | orchestrator | Saturday 12 July 2025 15:42:04 +0000 (0:00:01.806) 0:01:29.999 ********* 2025-07-12 15:43:03.081943 | orchestrator | 2025-07-12 15:43:03.081952 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 15:43:03.081962 | orchestrator | Saturday 12 July 2025 15:42:04 +0000 (0:00:00.063) 0:01:30.063 ********* 2025-07-12 15:43:03.081972 | orchestrator | 2025-07-12 15:43:03.081981 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 15:43:03.081996 | orchestrator | Saturday 12 July 2025 15:42:04 +0000 (0:00:00.063) 0:01:30.126 ********* 2025-07-12 15:43:03.082006 | orchestrator | 2025-07-12 15:43:03.082071 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-12 15:43:03.082085 | orchestrator | Saturday 12 July 2025 15:42:04 +0000 (0:00:00.062) 0:01:30.189 ********* 2025-07-12 15:43:03.082094 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:43:03.082104 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:43:03.082113 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:43:03.082123 | orchestrator | 2025-07-12 15:43:03.082133 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-12 15:43:03.082142 | orchestrator | Saturday 12 July 2025 15:42:07 +0000 (0:00:02.268) 0:01:32.457 ********* 2025-07-12 15:43:03.082152 | orchestrator | changed: 
[testbed-node-0] 2025-07-12 15:43:03.082161 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:43:03.082170 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:43:03.082179 | orchestrator | 2025-07-12 15:43:03.082189 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-07-12 15:43:03.082198 | orchestrator | Saturday 12 July 2025 15:42:14 +0000 (0:00:07.555) 0:01:40.013 ********* 2025-07-12 15:43:03.082208 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:43:03.082241 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:43:03.082252 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:43:03.082261 | orchestrator | 2025-07-12 15:43:03.082271 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-12 15:43:03.082280 | orchestrator | Saturday 12 July 2025 15:42:22 +0000 (0:00:07.380) 0:01:47.393 ********* 2025-07-12 15:43:03.082289 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.082298 | orchestrator | 2025-07-12 15:43:03.082308 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-12 15:43:03.082317 | orchestrator | Saturday 12 July 2025 15:42:22 +0000 (0:00:00.165) 0:01:47.558 ********* 2025-07-12 15:43:03.082326 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.082336 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.082345 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.082355 | orchestrator | 2025-07-12 15:43:03.082371 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-12 15:43:03.082381 | orchestrator | Saturday 12 July 2025 15:42:22 +0000 (0:00:00.798) 0:01:48.357 ********* 2025-07-12 15:43:03.082390 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.082405 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.082414 | orchestrator | changed: [testbed-node-0] 2025-07-12 
15:43:03.082424 | orchestrator | 2025-07-12 15:43:03.082433 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-12 15:43:03.082442 | orchestrator | Saturday 12 July 2025 15:42:23 +0000 (0:00:00.884) 0:01:49.242 ********* 2025-07-12 15:43:03.082451 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.082461 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.082470 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.082479 | orchestrator | 2025-07-12 15:43:03.082489 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-12 15:43:03.082498 | orchestrator | Saturday 12 July 2025 15:42:24 +0000 (0:00:00.788) 0:01:50.030 ********* 2025-07-12 15:43:03.082507 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.082517 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.082526 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:43:03.082535 | orchestrator | 2025-07-12 15:43:03.082545 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-12 15:43:03.082554 | orchestrator | Saturday 12 July 2025 15:42:25 +0000 (0:00:00.579) 0:01:50.610 ********* 2025-07-12 15:43:03.082563 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.082572 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.082582 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.082591 | orchestrator | 2025-07-12 15:43:03.082600 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-12 15:43:03.082617 | orchestrator | Saturday 12 July 2025 15:42:25 +0000 (0:00:00.671) 0:01:51.282 ********* 2025-07-12 15:43:03.082626 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.082635 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.082644 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.082653 | orchestrator | 2025-07-12 
15:43:03.082663 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-07-12 15:43:03.082672 | orchestrator | Saturday 12 July 2025 15:42:26 +0000 (0:00:01.095) 0:01:52.377 ********* 2025-07-12 15:43:03.082682 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.082691 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.082700 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.082709 | orchestrator | 2025-07-12 15:43:03.082719 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-07-12 15:43:03.082728 | orchestrator | Saturday 12 July 2025 15:42:27 +0000 (0:00:00.308) 0:01:52.685 ********* 2025-07-12 15:43:03.082738 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082748 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082758 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082768 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082778 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082788 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082807 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082818 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082833 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082843 | orchestrator | 2025-07-12 15:43:03.082852 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-07-12 15:43:03.082861 | orchestrator | Saturday 12 July 2025 15:42:28 +0000 (0:00:01.448) 0:01:54.134 ********* 2025-07-12 15:43:03.082871 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082881 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082891 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082900 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082935 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.082975 | orchestrator | 2025-07-12 15:43:03.082985 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-12 15:43:03.082994 | orchestrator | Saturday 12 July 2025 15:42:32 +0000 (0:00:03.766) 0:01:57.900 ********* 2025-07-12 15:43:03.083004 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.083013 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.083023 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.083034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.083051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.083064 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.083073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.083101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.083112 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:43:03.083122 | orchestrator | 2025-07-12 15:43:03.083131 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 15:43:03.083140 | orchestrator | Saturday 12 July 2025 15:42:35 +0000 (0:00:03.189) 0:02:01.090 ********* 2025-07-12 15:43:03.083150 | orchestrator | 2025-07-12 15:43:03.083159 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 15:43:03.083169 | orchestrator | Saturday 12 July 2025 15:42:35 +0000 (0:00:00.067) 0:02:01.158 ********* 2025-07-12 15:43:03.083178 | orchestrator | 2025-07-12 15:43:03.083188 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 
15:43:03.083197 | orchestrator | Saturday 12 July 2025 15:42:35 +0000 (0:00:00.071) 0:02:01.230 ********* 2025-07-12 15:43:03.083206 | orchestrator | 2025-07-12 15:43:03.083244 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-12 15:43:03.083256 | orchestrator | Saturday 12 July 2025 15:42:35 +0000 (0:00:00.067) 0:02:01.297 ********* 2025-07-12 15:43:03.083265 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:43:03.083274 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:43:03.083284 | orchestrator | 2025-07-12 15:43:03.083293 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-12 15:43:03.083303 | orchestrator | Saturday 12 July 2025 15:42:42 +0000 (0:00:06.437) 0:02:07.734 ********* 2025-07-12 15:43:03.083312 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:43:03.083321 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:43:03.083331 | orchestrator | 2025-07-12 15:43:03.083340 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-07-12 15:43:03.083350 | orchestrator | Saturday 12 July 2025 15:42:48 +0000 (0:00:06.162) 0:02:13.896 ********* 2025-07-12 15:43:03.083359 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:43:03.083368 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:43:03.083378 | orchestrator | 2025-07-12 15:43:03.083387 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-12 15:43:03.083396 | orchestrator | Saturday 12 July 2025 15:42:54 +0000 (0:00:06.221) 0:02:20.118 ********* 2025-07-12 15:43:03.083406 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:43:03.083415 | orchestrator | 2025-07-12 15:43:03.083424 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-12 15:43:03.083434 | orchestrator | Saturday 12 July 2025 15:42:54 +0000 
(0:00:00.154) 0:02:20.273 ********* 2025-07-12 15:43:03.083443 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.083453 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.083462 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.083471 | orchestrator | 2025-07-12 15:43:03.083481 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-12 15:43:03.083490 | orchestrator | Saturday 12 July 2025 15:42:55 +0000 (0:00:01.102) 0:02:21.375 ********* 2025-07-12 15:43:03.083499 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.083509 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.083518 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:43:03.083528 | orchestrator | 2025-07-12 15:43:03.083544 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-12 15:43:03.083553 | orchestrator | Saturday 12 July 2025 15:42:56 +0000 (0:00:00.644) 0:02:22.020 ********* 2025-07-12 15:43:03.083563 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.083572 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.083581 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.083591 | orchestrator | 2025-07-12 15:43:03.083600 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-12 15:43:03.083609 | orchestrator | Saturday 12 July 2025 15:42:57 +0000 (0:00:00.702) 0:02:22.722 ********* 2025-07-12 15:43:03.083618 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:43:03.083628 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:43:03.083637 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:43:03.083646 | orchestrator | 2025-07-12 15:43:03.083656 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-12 15:43:03.083665 | orchestrator | Saturday 12 July 2025 15:42:57 +0000 (0:00:00.659) 0:02:23.382 ********* 
2025-07-12 15:43:03.083674 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.083683 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.083693 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.083702 | orchestrator | 2025-07-12 15:43:03.083711 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-12 15:43:03.083721 | orchestrator | Saturday 12 July 2025 15:42:58 +0000 (0:00:00.963) 0:02:24.345 ********* 2025-07-12 15:43:03.083730 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:43:03.083739 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:43:03.083749 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:43:03.083758 | orchestrator | 2025-07-12 15:43:03.083767 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:43:03.083777 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-12 15:43:03.083788 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-12 15:43:03.083807 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-12 15:43:03.083818 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:43:03.083828 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:43:03.083837 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:43:03.083847 | orchestrator | 2025-07-12 15:43:03.083856 | orchestrator | 2025-07-12 15:43:03.083865 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:43:03.083875 | orchestrator | Saturday 12 July 2025 15:42:59 +0000 (0:00:00.924) 0:02:25.269 ********* 2025-07-12 
15:43:03.083884 | orchestrator | =============================================================================== 2025-07-12 15:43:03.083894 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.01s 2025-07-12 15:43:03.083903 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.92s 2025-07-12 15:43:03.083913 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.72s 2025-07-12 15:43:03.083922 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.60s 2025-07-12 15:43:03.083931 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.71s 2025-07-12 15:43:03.083941 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.36s 2025-07-12 15:43:03.083950 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.77s 2025-07-12 15:43:03.083969 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.19s 2025-07-12 15:43:03.083978 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.43s 2025-07-12 15:43:03.083987 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.98s 2025-07-12 15:43:03.083997 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.81s 2025-07-12 15:43:03.084006 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.74s 2025-07-12 15:43:03.084015 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.59s 2025-07-12 15:43:03.084025 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.51s 2025-07-12 15:43:03.084034 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.50s 2025-07-12 15:43:03.084044 
| orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2025-07-12 15:43:03.084053 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.40s 2025-07-12 15:43:03.084063 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.31s 2025-07-12 15:43:03.084072 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.23s 2025-07-12 15:43:03.084082 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.10s 2025-07-12 15:43:03.084091 | orchestrator | 2025-07-12 15:43:03 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:43:03.084101 | orchestrator | 2025-07-12 15:43:03 | INFO  | Task 00936e92-b768-4ac9-8985-4f52461d8bcd is in state SUCCESS 2025-07-12 15:43:03.084111 | orchestrator | 2025-07-12 15:43:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:43:06.117007 | orchestrator | 2025-07-12 15:43:06 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:43:06.117111 | orchestrator | 2025-07-12 15:43:06 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:43:06.117129 | orchestrator | 2025-07-12 15:43:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:43:09.168092 | orchestrator | 2025-07-12 15:43:09 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:43:09.171816 | orchestrator | 2025-07-12 15:43:09 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:43:09.171860 | orchestrator | 2025-07-12 15:43:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:43:12.229723 | orchestrator | 2025-07-12 15:43:12 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state STARTED 2025-07-12 15:43:12.230411 | orchestrator | 2025-07-12 15:43:12 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is 
15:43:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:45:35.561204 | orchestrator | 2025-07-12 15:45:35 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state
STARTED
2025-07-12 15:45:35.562597 | orchestrator | 2025-07-12 15:45:35 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:45:35.569814 | orchestrator | 2025-07-12 15:45:35 | INFO  | Task 10b0dc54-564d-48a6-b385-ef3de6308b40 is in state SUCCESS
2025-07-12 15:45:35.570660 | orchestrator |
2025-07-12 15:45:35.572651 | orchestrator |
2025-07-12 15:45:35.572778 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 15:45:35.572797 | orchestrator |
2025-07-12 15:45:35.572810 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 15:45:35.572822 | orchestrator | Saturday 12 July 2025 15:39:17 +0000 (0:00:00.608) 0:00:00.608 *********
2025-07-12 15:45:35.572833 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.572845 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.572856 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.572866 | orchestrator |
2025-07-12 15:45:35.572878 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 15:45:35.572889 | orchestrator | Saturday 12 July 2025 15:39:18 +0000 (0:00:00.395) 0:00:01.004 *********
2025-07-12 15:45:35.572900 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-07-12 15:45:35.572911 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-07-12 15:45:35.572922 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-07-12 15:45:35.572933 | orchestrator |
2025-07-12 15:45:35.572943 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-07-12 15:45:35.572954 | orchestrator |
2025-07-12 15:45:35.573033 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-07-12 15:45:35.573046 | orchestrator | Saturday 12 July 2025 15:39:18 +0000 (0:00:00.908) 0:00:01.913 *********
2025-07-12 15:45:35.573171 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:45:35.573187 | orchestrator |
2025-07-12 15:45:35.573198 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-07-12 15:45:35.573211 | orchestrator | Saturday 12 July 2025 15:39:20 +0000 (0:00:01.236) 0:00:03.149 *********
2025-07-12 15:45:35.573224 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.573236 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.573248 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.573260 | orchestrator |
2025-07-12 15:45:35.573272 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-12 15:45:35.573285 | orchestrator | Saturday 12 July 2025 15:39:21 +0000 (0:00:00.953) 0:00:04.103 *********
2025-07-12 15:45:35.573297 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:45:35.573309 | orchestrator |
2025-07-12 15:45:35.573321 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-07-12 15:45:35.573333 | orchestrator | Saturday 12 July 2025 15:39:22 +0000 (0:00:00.993) 0:00:05.097 *********
2025-07-12 15:45:35.573345 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.573358 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.573370 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.573382 | orchestrator |
2025-07-12 15:45:35.573394 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-07-12 15:45:35.573407 | orchestrator | Saturday 12 July 2025 15:39:23 +0000 (0:00:01.246) 0:00:06.343 *********
2025-07-12 15:45:35.573446 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 15:45:35.573459 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 15:45:35.573472 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 15:45:35.573484 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 15:45:35.573496 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 15:45:35.573508 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 15:45:35.573522 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 15:45:35.573567 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 15:45:35.573579 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 15:45:35.573592 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 15:45:35.573602 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 15:45:35.573657 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 15:45:35.573668 | orchestrator |
2025-07-12 15:45:35.573679 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-07-12 15:45:35.573690 | orchestrator | Saturday 12 July 2025 15:39:26 +0000 (0:00:03.341) 0:00:09.685 *********
2025-07-12 15:45:35.573701 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-07-12 15:45:35.573772 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-07-12 15:45:35.573784 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-07-12 15:45:35.573795 | orchestrator |
2025-07-12
15:45:35.573830 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-12 15:45:35.573840 | orchestrator | Saturday 12 July 2025 15:39:27 +0000 (0:00:01.224) 0:00:10.910 ********* 2025-07-12 15:45:35.573851 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-07-12 15:45:35.573918 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-07-12 15:45:35.573931 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-07-12 15:45:35.573951 | orchestrator | 2025-07-12 15:45:35.573980 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-12 15:45:35.573991 | orchestrator | Saturday 12 July 2025 15:39:29 +0000 (0:00:01.591) 0:00:12.501 ********* 2025-07-12 15:45:35.574002 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-07-12 15:45:35.574013 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.574122 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-07-12 15:45:35.574138 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.574148 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-07-12 15:45:35.574159 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.574197 | orchestrator | 2025-07-12 15:45:35.574208 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-07-12 15:45:35.574218 | orchestrator | Saturday 12 July 2025 15:39:30 +0000 (0:00:01.085) 0:00:13.586 ********* 2025-07-12 15:45:35.574233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.574257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.574269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.574310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.574362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.574393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.574406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.574424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.574435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.574447 | orchestrator | 2025-07-12 15:45:35.574458 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-07-12 15:45:35.574469 | orchestrator | Saturday 12 July 2025 15:39:33 +0000 (0:00:02.819) 0:00:16.405 ********* 2025-07-12 15:45:35.574480 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.574491 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.574531 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.574647 | orchestrator | 2025-07-12 15:45:35.574658 | orchestrator | TASK [loadbalancer : Ensuring 
proxysql service config subdirectories exist] **** 2025-07-12 15:45:35.574669 | orchestrator | Saturday 12 July 2025 15:39:34 +0000 (0:00:01.113) 0:00:17.519 ********* 2025-07-12 15:45:35.574680 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-07-12 15:45:35.574691 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-07-12 15:45:35.574701 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-07-12 15:45:35.574712 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-07-12 15:45:35.574723 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-07-12 15:45:35.574733 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-07-12 15:45:35.574744 | orchestrator | 2025-07-12 15:45:35.574755 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-07-12 15:45:35.574766 | orchestrator | Saturday 12 July 2025 15:39:36 +0000 (0:00:02.122) 0:00:19.641 ********* 2025-07-12 15:45:35.574777 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.574794 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.574805 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.574816 | orchestrator | 2025-07-12 15:45:35.574826 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-07-12 15:45:35.574837 | orchestrator | Saturday 12 July 2025 15:39:38 +0000 (0:00:01.556) 0:00:21.197 ********* 2025-07-12 15:45:35.574848 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:45:35.574858 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:45:35.574869 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:45:35.574880 | orchestrator | 2025-07-12 15:45:35.574890 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-07-12 15:45:35.574901 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:01.408) 0:00:22.606 ********* 2025-07-12 15:45:35.574912 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 15:45:35.574933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 15:45:35.575006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 15:45:35.575028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 15:45:35.575040 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.575052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 15:45:35.575163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 15:45:35.575177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 15:45:35.575195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 15:45:35.575251 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.575263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 15:45:35.575280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 15:45:35.575292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 15:45:35.575315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 15:45:35.575327 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.575338 | orchestrator | 2025-07-12 15:45:35.575349 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-07-12 15:45:35.575360 | orchestrator | Saturday 12 July 2025 15:39:40 +0000 (0:00:01.289) 0:00:23.895 ********* 2025-07-12 15:45:35.575371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 15:45:35.575460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 15:45:35.575555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 15:45:35.575585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 15:45:35.575597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 15:45:35.575752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167', '__omit_place_holder__edcd91ad0e641cd9f9e30a917fb727557e8c2167'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 15:45:35.575766 | orchestrator | 2025-07-12 15:45:35.575777 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-07-12 15:45:35.575788 | orchestrator | Saturday 12 July 2025 15:39:45 +0000 (0:00:04.484) 0:00:28.380 ********* 2025-07-12 15:45:35.575800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.575891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.575903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.575914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.575925 | orchestrator | 2025-07-12 15:45:35.575936 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-07-12 15:45:35.576033 | orchestrator | Saturday 12 July 2025 15:39:49 +0000 (0:00:04.238) 0:00:32.618 ********* 2025-07-12 15:45:35.576049 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 15:45:35.576068 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 15:45:35.576080 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 15:45:35.576091 | orchestrator | 2025-07-12 15:45:35.576102 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-07-12 15:45:35.576112 | orchestrator | Saturday 12 July 2025 15:39:52 +0000 (0:00:02.748) 0:00:35.366 ********* 2025-07-12 15:45:35.576121 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 15:45:35.576131 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 15:45:35.576141 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 15:45:35.576150 | orchestrator | 2025-07-12 15:45:35.576160 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-07-12 15:45:35.576177 | orchestrator | Saturday 12 July 2025 15:39:59 +0000 (0:00:07.537) 0:00:42.904 ********* 2025-07-12 15:45:35.576186 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.576196 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.576217 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.576227 | orchestrator | 2025-07-12 15:45:35.576241 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-07-12 15:45:35.576251 | orchestrator | Saturday 12 July 2025 15:40:00 +0000 (0:00:00.972) 0:00:43.876 ********* 2025-07-12 15:45:35.576261 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 15:45:35.576272 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 15:45:35.576281 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 15:45:35.576313 | orchestrator | 2025-07-12 15:45:35.576324 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-07-12 15:45:35.576333 | orchestrator | Saturday 12 July 2025 15:40:04 +0000 (0:00:03.447) 0:00:47.324 ********* 2025-07-12 15:45:35.576343 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 15:45:35.576352 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 
15:45:35.576362 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 15:45:35.576371 | orchestrator | 2025-07-12 15:45:35.576381 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-07-12 15:45:35.576390 | orchestrator | Saturday 12 July 2025 15:40:07 +0000 (0:00:02.885) 0:00:50.209 ********* 2025-07-12 15:45:35.576400 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-07-12 15:45:35.576459 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-07-12 15:45:35.576471 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-07-12 15:45:35.576481 | orchestrator | 2025-07-12 15:45:35.576490 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-07-12 15:45:35.576500 | orchestrator | Saturday 12 July 2025 15:40:09 +0000 (0:00:01.826) 0:00:52.035 ********* 2025-07-12 15:45:35.576509 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-07-12 15:45:35.576518 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-07-12 15:45:35.576527 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-07-12 15:45:35.576537 | orchestrator | 2025-07-12 15:45:35.576546 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-12 15:45:35.576583 | orchestrator | Saturday 12 July 2025 15:40:10 +0000 (0:00:01.887) 0:00:53.923 ********* 2025-07-12 15:45:35.576619 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.576631 | orchestrator | 2025-07-12 15:45:35.576640 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-07-12 15:45:35.576730 | orchestrator | Saturday 12 July 2025 15:40:11 +0000 (0:00:00.688) 
0:00:54.612 ********* 2025-07-12 15:45:35.576741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.576766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.576777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.576792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.576803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.576813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.576823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.576833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.576856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.576866 | orchestrator | 2025-07-12 15:45:35.576876 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-07-12 15:45:35.576886 | 
orchestrator | Saturday 12 July 2025 15:40:15 +0000 (0:00:03.623) 0:00:58.235 ********* 2025-07-12 15:45:35.576900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 15:45:35.576911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 15:45:35.576921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-07-12 15:45:35.576930 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.576941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 15:45:35.576951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 15:45:35.576990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-07-12 15:45:35.577001 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.577011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 15:45:35.577026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 15:45:35.577036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-07-12 15:45:35.577046 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.577056 | orchestrator | 2025-07-12 15:45:35.577065 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-07-12 15:45:35.577075 | orchestrator | Saturday 12 July 2025 15:40:15 +0000 (0:00:00.659) 0:00:58.894 ********* 2025-07-12 15:45:35.577085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 15:45:35.577101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 15:45:35.577116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 15:45:35.577127 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.577136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 15:45:35.577155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 15:45:35.577165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 15:45:35.577175 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.577185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 15:45:35.577195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 15:45:35.577251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 15:45:35.577264 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.577308 | orchestrator | 2025-07-12 15:45:35.577319 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-12 15:45:35.577328 | orchestrator | Saturday 12 July 2025 15:40:16 +0000 (0:00:01.008) 0:00:59.903 ********* 2025-07-12 15:45:35.577345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 15:45:35.577361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.577372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.577381 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.577391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.577462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.577474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.577484 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.577500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.577511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.577525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.577536 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.577546 | orchestrator |
2025-07-12 15:45:35.577556 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-07-12 15:45:35.577565 | orchestrator | Saturday 12 July 2025 15:40:17 +0000 (0:00:00.564) 0:01:00.468 *********
2025-07-12 15:45:35.577575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.577592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.577626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.577637 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.577647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.577665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.577680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.577691 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.577701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.577722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.577732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.577742 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.577752 | orchestrator |
2025-07-12 15:45:35.577761 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-07-12 15:45:35.577771 | orchestrator | Saturday 12 July 2025 15:40:18 +0000 (0:00:00.603) 0:01:01.071 *********
2025-07-12 15:45:35.577781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.577797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.577807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.577817 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.577892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.577915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.577925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.577935 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.577945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.577977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.577988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.577998 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.578008 | orchestrator |
2025-07-12 15:45:35.578059 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-07-12 15:45:35.578072 | orchestrator | Saturday 12 July 2025 15:40:19 +0000 (0:00:01.225) 0:01:02.296 *********
2025-07-12 15:45:35.578087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.578104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.578179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.578189 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.578199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.578210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.578229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.578239 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.578254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.578272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.578282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.578292 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.578302 | orchestrator |
2025-07-12 15:45:35.578311 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-07-12 15:45:35.578321 | orchestrator | Saturday 12 July 2025 15:40:19 +0000 (0:00:00.623) 0:01:02.919 *********
2025-07-12 15:45:35.578331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.578341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.578357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.578368 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.578378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.578401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.578412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.578422 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.578432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.578442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.578452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.578462 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.578472 | orchestrator |
2025-07-12 15:45:35.578482 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-07-12 15:45:35.578496 | orchestrator | Saturday 12 July 2025 15:40:20 +0000 (0:00:00.828) 0:01:03.748 *********
2025-07-12 15:45:35.578506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.578527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.578567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.578578 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.578629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.578641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.578651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.578661 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.578700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 15:45:35.578719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 15:45:35.578758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 15:45:35.578769 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.578779 | orchestrator |
2025-07-12 15:45:35.578789 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-07-12 15:45:35.578798 | orchestrator | Saturday 12 July 2025 15:40:22 +0000 (0:00:01.356) 0:01:05.104 *********
2025-07-12 15:45:35.578808 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-07-12 15:45:35.578818 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-07-12 15:45:35.578828 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-07-12 15:45:35.578837 | orchestrator |
2025-07-12 15:45:35.578847 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-07-12 15:45:35.578856 | orchestrator | Saturday 12 July 2025 15:40:23 +0000 (0:00:01.770) 0:01:06.875 *********
2025-07-12 15:45:35.578866 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-07-12 15:45:35.578876 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-07-12 15:45:35.578885 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-07-12 15:45:35.578894 | orchestrator |
2025-07-12 15:45:35.578904 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-07-12 15:45:35.578914 | orchestrator | Saturday 12 July 2025 15:40:25 +0000 (0:00:01.513) 0:01:08.388 *********
2025-07-12 15:45:35.578924
| orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 15:45:35.578933 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 15:45:35.578943 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 15:45:35.578952 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 15:45:35.579080 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.579092 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 15:45:35.579125 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.579135 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 15:45:35.579144 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.579162 | orchestrator | 2025-07-12 15:45:35.579172 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-07-12 15:45:35.579182 | orchestrator | Saturday 12 July 2025 15:40:26 +0000 (0:00:00.860) 0:01:09.249 ********* 2025-07-12 15:45:35.579199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 
2025-07-12 15:45:35.579211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.579226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 15:45:35.579237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.579247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.579257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 15:45:35.579272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 
15:45:35.579285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.579294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 15:45:35.579302 | orchestrator | 2025-07-12 15:45:35.579310 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-07-12 15:45:35.579318 | orchestrator | Saturday 12 July 2025 15:40:29 +0000 (0:00:02.840) 0:01:12.089 ********* 2025-07-12 15:45:35.579326 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.579334 | orchestrator | 2025-07-12 15:45:35.579342 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-07-12 15:45:35.579350 | orchestrator | Saturday 12 July 2025 15:40:29 +0000 (0:00:00.850) 0:01:12.939 ********* 2025-07-12 15:45:35.579359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-12 15:45:35.579368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.579377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-12 15:45:35.579396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.579555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 
'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-12 15:45:35.579634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.579652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579673 | orchestrator | 2025-07-12 15:45:35.579682 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-07-12 15:45:35.579690 | orchestrator | Saturday 12 July 2025 15:40:33 +0000 (0:00:04.025) 0:01:16.965 ********* 2025-07-12 15:45:35.579698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-12 15:45:35.579707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  
2025-07-12 15:45:35.579767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-12 15:45:35.579782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.579812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-12 15:45:35.579820 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.579829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.579843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579881 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.579893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.579901 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.579909 | orchestrator | 2025-07-12 15:45:35.579917 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-07-12 15:45:35.579925 | orchestrator | Saturday 12 July 2025 15:40:34 +0000 (0:00:00.670) 0:01:17.635 ********* 2025-07-12 15:45:35.579934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-12 15:45:35.579955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-12 15:45:35.579979 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.580007 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-12 15:45:35.580023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-12 15:45:35.580031 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.580039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-12 15:45:35.580047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-12 15:45:35.580055 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.580063 | orchestrator | 2025-07-12 15:45:35.580070 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-07-12 15:45:35.580079 | orchestrator | Saturday 12 July 2025 15:40:35 +0000 (0:00:01.226) 0:01:18.862 ********* 2025-07-12 15:45:35.580116 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.580125 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.580184 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.580193 | orchestrator | 2025-07-12 15:45:35.580200 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-07-12 15:45:35.580208 | orchestrator | Saturday 12 July 2025 15:40:37 +0000 (0:00:01.400) 0:01:20.262 ********* 2025-07-12 15:45:35.580216 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.580224 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.580240 | orchestrator | changed: [testbed-node-2] 2025-07-12 
15:45:35.580248 | orchestrator |
2025-07-12 15:45:35.580256 | orchestrator | TASK [include_role : barbican] *************************************************
2025-07-12 15:45:35.580264 | orchestrator | Saturday 12 July 2025 15:40:39 +0000 (0:00:02.168) 0:01:22.431 *********
2025-07-12 15:45:35.580272 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:45:35.580279 | orchestrator |
2025-07-12 15:45:35.580287 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-07-12 15:45:35.580294 | orchestrator | Saturday 12 July 2025 15:40:40 +0000 (0:00:00.758) 0:01:23.190 *********
2025-07-12 15:45:35.580335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.580352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.580436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.580473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580494 | orchestrator |
2025-07-12 15:45:35.580524 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-07-12 15:45:35.580533 | orchestrator | Saturday 12 July 2025 15:40:44 +0000 (0:00:04.233) 0:01:27.424 *********
2025-07-12 15:45:35.580541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.580549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580572 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.580584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.580601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580617 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.580626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.580638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.580659 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.580667 | orchestrator |
2025-07-12 15:45:35.580675 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-07-12 15:45:35.580683 | orchestrator | Saturday 12 July 2025 15:40:45 +0000 (0:00:00.777) 0:01:28.201 *********
2025-07-12 15:45:35.580694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 15:45:35.580703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 15:45:35.580740 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.580749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 15:45:35.580757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 15:45:35.580765 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.580773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 15:45:35.580780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 15:45:35.580788 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.580913 | orchestrator |
2025-07-12 15:45:35.580934 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-07-12 15:45:35.580942 | orchestrator | Saturday 12 July 2025 15:40:46 +0000 (0:00:00.814) 0:01:29.016 *********
2025-07-12 15:45:35.580949 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:45:35.580973 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:45:35.580982 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:45:35.580989 | orchestrator |
2025-07-12 15:45:35.580997 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-07-12 15:45:35.581005 | orchestrator | Saturday 12 July 2025 15:40:47 +0000 (0:00:01.491) 0:01:30.507 *********
2025-07-12 15:45:35.581012 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:45:35.581020 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:45:35.581027 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:45:35.581035 | orchestrator |
2025-07-12 15:45:35.581043 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-07-12 15:45:35.581051 | orchestrator | Saturday 12 July 2025 15:40:50 +0000 (0:00:02.605) 0:01:33.113 *********
2025-07-12 15:45:35.581058 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.581066 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.581074 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.581081 | orchestrator |
2025-07-12 15:45:35.581089 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-07-12 15:45:35.581096 | orchestrator | Saturday 12 July 2025 15:40:50 +0000 (0:00:00.310) 0:01:33.423 *********
2025-07-12 15:45:35.581104 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:45:35.581112 | orchestrator |
2025-07-12 15:45:35.581120 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-07-12 15:45:35.581134 | orchestrator | Saturday 12 July 2025 15:40:51 +0000 (0:00:00.643) 0:01:34.067 *********
2025-07-12 15:45:35.581149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-07-12 15:45:35.581163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-07-12 15:45:35.581172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-07-12 15:45:35.581181 | orchestrator |
2025-07-12 15:45:35.581189 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-07-12 15:45:35.581197 | orchestrator | Saturday 12 July 2025 15:40:54 +0000 (0:00:02.908) 0:01:36.975 *********
2025-07-12 15:45:35.581205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-07-12 15:45:35.581213 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.581221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-07-12 15:45:35.581234 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.581248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-07-12 15:45:35.581256 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.581264 | orchestrator |
2025-07-12 15:45:35.581272 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-07-12 15:45:35.581280 | orchestrator | Saturday 12 July 2025 15:40:55 +0000 (0:00:01.684) 0:01:38.659 *********
2025-07-12 15:45:35.581292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-07-12 15:45:35.581302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-07-12 15:45:35.581310 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.581319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-07-12 15:45:35.581327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-07-12 15:45:35.581335 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.581343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-07-12 15:45:35.581356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-07-12 15:45:35.581364 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.581372 | orchestrator |
2025-07-12 15:45:35.581380 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-07-12 15:45:35.581387 | orchestrator | Saturday 12 July 2025 15:40:57 +0000 (0:00:01.850) 0:01:40.509 *********
2025-07-12 15:45:35.581395 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.581403 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.581436 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.581444 | orchestrator |
2025-07-12 15:45:35.581452 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-07-12 15:45:35.581460 | orchestrator | Saturday 12 July 2025 15:40:58 +0000 (0:00:00.942) 0:01:41.452 *********
2025-07-12 15:45:35.581468 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.581476 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.581483 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.581491 | orchestrator |
2025-07-12 15:45:35.581499 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-07-12 15:45:35.581521 | orchestrator | Saturday 12 July 2025 15:40:59 +0000 (0:00:01.009) 0:01:42.461 *********
2025-07-12 15:45:35.581530 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:45:35.581537 | orchestrator |
2025-07-12 15:45:35.581553 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-07-12 15:45:35.581561 | orchestrator | Saturday 12 July 2025 15:41:00 +0000 (0:00:00.916) 0:01:43.378 *********
2025-07-12 15:45:35.581573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.581583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.581592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.581606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.581621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.581629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.581641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.581650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.581664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.581672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.581686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.581743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.581752 | orchestrator |
2025-07-12 15:45:35.581760 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-07-12 15:45:35.581768 | orchestrator | Saturday 12 July 2025 15:41:03 +0000 (0:00:03.474) 0:01:46.853 *********
2025-07-12 15:45:35.581776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.581791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 15:45:35.581799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host',
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.581812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.581821 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.581833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.581841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.581858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.581866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.581874 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.581888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.581912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.581920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.581934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})  2025-07-12 15:45:35.581942 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.581950 | orchestrator | 2025-07-12 15:45:35.582003 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-07-12 15:45:35.582013 | orchestrator | Saturday 12 July 2025 15:41:05 +0000 (0:00:01.147) 0:01:48.000 ********* 2025-07-12 15:45:35.582051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 15:45:35.582094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 15:45:35.582105 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.582113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 15:45:35.582121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 15:45:35.582129 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.582143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 15:45:35.582152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}})  2025-07-12 15:45:35.582160 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.582168 | orchestrator | 2025-07-12 15:45:35.582176 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-07-12 15:45:35.582183 | orchestrator | Saturday 12 July 2025 15:41:05 +0000 (0:00:00.858) 0:01:48.858 ********* 2025-07-12 15:45:35.582191 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.582199 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.582207 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.582215 | orchestrator | 2025-07-12 15:45:35.582223 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-07-12 15:45:35.582230 | orchestrator | Saturday 12 July 2025 15:41:07 +0000 (0:00:01.253) 0:01:50.111 ********* 2025-07-12 15:45:35.582245 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.582253 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.582261 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.582268 | orchestrator | 2025-07-12 15:45:35.582276 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-07-12 15:45:35.582284 | orchestrator | Saturday 12 July 2025 15:41:09 +0000 (0:00:01.928) 0:01:52.040 ********* 2025-07-12 15:45:35.582296 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.582304 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.582311 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.582319 | orchestrator | 2025-07-12 15:45:35.582327 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-07-12 15:45:35.582344 | orchestrator | Saturday 12 July 2025 15:41:09 +0000 (0:00:00.522) 0:01:52.562 ********* 2025-07-12 15:45:35.582352 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.582359 | orchestrator | 
skipping: [testbed-node-1] 2025-07-12 15:45:35.582366 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.582373 | orchestrator | 2025-07-12 15:45:35.582379 | orchestrator | TASK [include_role : designate] ************************************************ 2025-07-12 15:45:35.582386 | orchestrator | Saturday 12 July 2025 15:41:09 +0000 (0:00:00.303) 0:01:52.866 ********* 2025-07-12 15:45:35.582393 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.582399 | orchestrator | 2025-07-12 15:45:35.582406 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-07-12 15:45:35.582412 | orchestrator | Saturday 12 July 2025 15:41:10 +0000 (0:00:00.759) 0:01:53.626 ********* 2025-07-12 15:45:35.582420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:45:35.582427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:45:35.582435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:45:35.582472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:45:35.582478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2025-07-12 15:45:35.582516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582540 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:45:35.582558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:45:35.582570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582626 | orchestrator | 2025-07-12 15:45:35.582633 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-07-12 15:45:35.582640 | orchestrator | Saturday 12 July 2025 15:41:15 +0000 (0:00:04.783) 0:01:58.409 ********* 2025-07-12 15:45:35.582678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:45:35.582686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:45:35.582697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 
15:45:35.582704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582736 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.582747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:45:35.582754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:45:35.582761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582847 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.582858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:45:35.582865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:45:35.582872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.582916 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.582923 | orchestrator | 2025-07-12 15:45:35.582933 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-07-12 15:45:35.582940 | orchestrator | Saturday 12 July 2025 15:41:16 +0000 (0:00:01.059) 0:01:59.469 ********* 2025-07-12 15:45:35.582947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 15:45:35.582954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 15:45:35.582976 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.582983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 15:45:35.582989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 15:45:35.582996 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.583003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 15:45:35.583010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 15:45:35.583016 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.583027 | orchestrator | 2025-07-12 15:45:35.583034 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-07-12 15:45:35.583040 | orchestrator | Saturday 12 July 2025 15:41:17 +0000 (0:00:01.125) 0:02:00.595 ********* 2025-07-12 15:45:35.583047 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.583054 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.583060 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.583067 | orchestrator | 2025-07-12 15:45:35.583073 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-07-12 15:45:35.583080 | orchestrator | Saturday 12 July 2025 15:41:19 +0000 (0:00:01.795) 0:02:02.390 ********* 2025-07-12 15:45:35.583087 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.583093 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.583099 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.583106 | orchestrator | 2025-07-12 15:45:35.583113 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-07-12 15:45:35.583119 | orchestrator | Saturday 12 July 2025 15:41:21 +0000 (0:00:02.016) 0:02:04.407 ********* 2025-07-12 15:45:35.583126 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.583132 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.583139 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.583145 | orchestrator | 2025-07-12 15:45:35.583152 | orchestrator | TASK 
[include_role : glance] *************************************************** 2025-07-12 15:45:35.583158 | orchestrator | Saturday 12 July 2025 15:41:21 +0000 (0:00:00.313) 0:02:04.720 ********* 2025-07-12 15:45:35.583165 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.583172 | orchestrator | 2025-07-12 15:45:35.583178 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-07-12 15:45:35.583185 | orchestrator | Saturday 12 July 2025 15:41:22 +0000 (0:00:00.795) 0:02:05.516 ********* 2025-07-12 15:45:35.583201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:45:35.583210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.583227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:45:35.583239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  
2025-07-12 15:45:35.583255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:45:35.583266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.583278 | orchestrator | 2025-07-12 15:45:35.583285 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-07-12 15:45:35.583292 | orchestrator | Saturday 12 July 2025 15:41:26 +0000 (0:00:04.204) 0:02:09.720 
********* 2025-07-12 15:45:35.583303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 15:45:35.583314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 15:45:35.583327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.583334 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.583349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.583364 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.583371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 15:45:35.583388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.583396 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.583407 | orchestrator | 2025-07-12 15:45:35.583413 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-07-12 15:45:35.583420 | orchestrator | Saturday 12 July 2025 15:41:29 +0000 (0:00:02.793) 0:02:12.514 ********* 2025-07-12 15:45:35.583427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 15:45:35.583434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 15:45:35.583441 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.583448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 15:45:35.583455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 15:45:35.583462 | 
orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.583469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 15:45:35.583480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 15:45:35.583487 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.583493 | orchestrator | 2025-07-12 15:45:35.583500 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-07-12 15:45:35.583507 | orchestrator | Saturday 12 July 2025 15:41:32 +0000 (0:00:03.030) 0:02:15.544 ********* 2025-07-12 15:45:35.583513 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.583520 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.583531 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.583537 | orchestrator | 2025-07-12 15:45:35.583544 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-07-12 15:45:35.583550 | orchestrator | Saturday 12 July 2025 15:41:34 +0000 (0:00:01.561) 0:02:17.106 ********* 
2025-07-12 15:45:35.583557 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.583564 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.583570 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.583577 | orchestrator | 2025-07-12 15:45:35.583587 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-07-12 15:45:35.583593 | orchestrator | Saturday 12 July 2025 15:41:36 +0000 (0:00:01.992) 0:02:19.098 ********* 2025-07-12 15:45:35.583600 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.583606 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.583613 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.583619 | orchestrator | 2025-07-12 15:45:35.583626 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-07-12 15:45:35.583633 | orchestrator | Saturday 12 July 2025 15:41:36 +0000 (0:00:00.306) 0:02:19.405 ********* 2025-07-12 15:45:35.583639 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.583646 | orchestrator | 2025-07-12 15:45:35.583652 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-07-12 15:45:35.583659 | orchestrator | Saturday 12 July 2025 15:41:37 +0000 (0:00:00.824) 0:02:20.229 ********* 2025-07-12 15:45:35.583666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 15:45:35.583673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 15:45:35.583680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 15:45:35.583687 | orchestrator | 2025-07-12 15:45:35.583693 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-07-12 15:45:35.583700 | orchestrator | Saturday 12 July 2025 15:41:40 +0000 (0:00:03.241) 0:02:23.471 ********* 2025-07-12 15:45:35.583719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 15:45:35.583735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 15:45:35.583742 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.583749 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.583756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 15:45:35.583763 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.583770 | orchestrator | 2025-07-12 15:45:35.583776 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-07-12 15:45:35.583783 | orchestrator | Saturday 12 July 2025 15:41:40 +0000 (0:00:00.422) 0:02:23.893 ********* 2025-07-12 15:45:35.583789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 15:45:35.583796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 15:45:35.583803 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.583809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 15:45:35.583816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 15:45:35.583823 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.583843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 15:45:35.583850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 15:45:35.583856 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.583868 | orchestrator | 2025-07-12 15:45:35.583875 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-07-12 15:45:35.583882 | orchestrator | Saturday 12 July 2025 15:41:41 +0000 (0:00:00.668) 0:02:24.562 ********* 2025-07-12 15:45:35.583888 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.583895 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.583901 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.583907 | orchestrator | 2025-07-12 15:45:35.583914 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-07-12 15:45:35.583921 | orchestrator | Saturday 12 July 2025 15:41:43 +0000 (0:00:01.612) 0:02:26.174 ********* 2025-07-12 15:45:35.583927 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.583934 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.583940 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.583947 | orchestrator | 2025-07-12 15:45:35.583970 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-07-12 15:45:35.583977 | orchestrator | Saturday 12 July 2025 15:41:45 +0000 (0:00:01.920) 0:02:28.094 ********* 2025-07-12 15:45:35.583984 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.583990 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.583997 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.584004 | orchestrator | 2025-07-12 15:45:35.584010 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-07-12 15:45:35.584017 | orchestrator | Saturday 12 July 2025 15:41:45 +0000 (0:00:00.314) 0:02:28.409 ********* 2025-07-12 15:45:35.584023 | orchestrator | included: horizon for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.584030 | orchestrator | 2025-07-12 15:45:35.584036 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-07-12 15:45:35.584043 | orchestrator | Saturday 12 July 2025 15:41:46 +0000 (0:00:00.881) 0:02:29.290 ********* 2025-07-12 15:45:35.584055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 15:45:35.584072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 15:45:35.584085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 15:45:35.584097 | orchestrator | 2025-07-12 15:45:35.584103 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-07-12 15:45:35.584110 | orchestrator | Saturday 12 July 2025 15:41:50 +0000 (0:00:03.698) 0:02:32.989 ********* 2025-07-12 15:45:35.584126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 15:45:35.584134 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.584141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 15:45:35.584153 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.584169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 15:45:35.584177 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.584183 | orchestrator | 2025-07-12 15:45:35.584190 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-07-12 15:45:35.584196 | orchestrator | Saturday 12 July 2025 15:41:50 +0000 (0:00:00.783) 0:02:33.773 ********* 2025-07-12 15:45:35.584203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 15:45:35.584211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 15:45:35.584224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 15:45:35.584232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 15:45:35.584239 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 15:45:35.584245 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.584252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 15:45:35.584259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 15:45:35.584269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 15:45:35.584277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 15:45:35.584283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 15:45:35.584293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 15:45:35.584300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 15:45:35.584307 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.584314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 15:45:35.584321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 15:45:35.584331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 15:45:35.584338 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.584345 | orchestrator | 2025-07-12 15:45:35.584351 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-07-12 15:45:35.584358 | orchestrator | Saturday 12 July 2025 15:41:51 +0000 (0:00:01.125) 0:02:34.899 ********* 
2025-07-12 15:45:35.584364 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.584371 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.584377 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.584384 | orchestrator | 2025-07-12 15:45:35.584390 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-07-12 15:45:35.584397 | orchestrator | Saturday 12 July 2025 15:41:53 +0000 (0:00:01.643) 0:02:36.543 ********* 2025-07-12 15:45:35.584403 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.584410 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.584416 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.584423 | orchestrator | 2025-07-12 15:45:35.584429 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-07-12 15:45:35.584435 | orchestrator | Saturday 12 July 2025 15:41:55 +0000 (0:00:02.169) 0:02:38.712 ********* 2025-07-12 15:45:35.584442 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.584448 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.584455 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.584461 | orchestrator | 2025-07-12 15:45:35.584468 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-07-12 15:45:35.584474 | orchestrator | Saturday 12 July 2025 15:41:56 +0000 (0:00:00.305) 0:02:39.017 ********* 2025-07-12 15:45:35.584481 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.584487 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.584494 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.584500 | orchestrator | 2025-07-12 15:45:35.584507 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-07-12 15:45:35.584513 | orchestrator | Saturday 12 July 2025 15:41:56 +0000 (0:00:00.326) 0:02:39.344 ********* 
2025-07-12 15:45:35.584520 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.584526 | orchestrator | 2025-07-12 15:45:35.584532 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-07-12 15:45:35.584539 | orchestrator | Saturday 12 July 2025 15:41:57 +0000 (0:00:01.207) 0:02:40.552 ********* 2025-07-12 15:45:35.584550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:45:35.584561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:45:35.584572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:45:35.584580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:45:35.584587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:45:35.584599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:45:35.584609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:45:35.584621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:45:35.584628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:45:35.584635 | orchestrator | 2025-07-12 15:45:35.584642 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-07-12 15:45:35.584648 | orchestrator | Saturday 12 July 2025 15:42:01 +0000 (0:00:04.359) 0:02:44.911 ********* 2025-07-12 15:45:35.584656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 15:45:35.584666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:45:35.584674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:45:35.584685 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.584695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 15:45:35.584703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:45:35.584710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:45:35.584720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 15:45:35.584728 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.584735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:45:35.584750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:45:35.584758 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.584764 | orchestrator | 2025-07-12 15:45:35.584771 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-07-12 15:45:35.584778 | orchestrator | Saturday 12 July 2025 15:42:02 +0000 (0:00:00.561) 0:02:45.473 ********* 2025-07-12 15:45:35.584785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 15:45:35.584792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 15:45:35.584799 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.584806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 15:45:35.584813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 15:45:35.584820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 15:45:35.584827 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.584834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 15:45:35.584840 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.584847 | orchestrator | 2025-07-12 15:45:35.584854 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-07-12 15:45:35.584860 | orchestrator | Saturday 12 July 2025 15:42:03 +0000 (0:00:00.935) 0:02:46.409 ********* 2025-07-12 15:45:35.584867 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.584873 | orchestrator | 
changed: [testbed-node-1] 2025-07-12 15:45:35.584880 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.584886 | orchestrator | 2025-07-12 15:45:35.584893 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-07-12 15:45:35.584904 | orchestrator | Saturday 12 July 2025 15:42:04 +0000 (0:00:01.122) 0:02:47.531 ********* 2025-07-12 15:45:35.584910 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.584916 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.584923 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.584929 | orchestrator | 2025-07-12 15:45:35.584936 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-07-12 15:45:35.584943 | orchestrator | Saturday 12 July 2025 15:42:06 +0000 (0:00:01.841) 0:02:49.373 ********* 2025-07-12 15:45:35.584953 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.585002 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.585010 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.585017 | orchestrator | 2025-07-12 15:45:35.585023 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-07-12 15:45:35.585030 | orchestrator | Saturday 12 July 2025 15:42:06 +0000 (0:00:00.294) 0:02:49.667 ********* 2025-07-12 15:45:35.585037 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.585043 | orchestrator | 2025-07-12 15:45:35.585050 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-07-12 15:45:35.585056 | orchestrator | Saturday 12 July 2025 15:42:07 +0000 (0:00:01.178) 0:02:50.845 ********* 2025-07-12 15:45:35.585067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 15:45:35.585075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 15:45:35.585089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 15:45:35.585116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585124 | orchestrator | 2025-07-12 15:45:35.585130 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-07-12 15:45:35.585137 | orchestrator | Saturday 12 July 2025 15:42:11 +0000 (0:00:03.166) 0:02:54.012 ********* 2025-07-12 15:45:35.585144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 15:45:35.585151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585164 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.585174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 15:45:35.585181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585187 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.585196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 15:45:35.585203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585210 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.585216 | orchestrator | 2025-07-12 15:45:35.585222 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-07-12 15:45:35.585232 | orchestrator | Saturday 12 July 2025 15:42:11 +0000 (0:00:00.698) 0:02:54.711 ********* 2025-07-12 15:45:35.585238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 15:45:35.585245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 15:45:35.585251 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.585257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 15:45:35.585266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 15:45:35.585276 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.585285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 15:45:35.585296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 15:45:35.585310 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.585320 | orchestrator | 2025-07-12 15:45:35.585330 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-07-12 15:45:35.585339 | orchestrator | Saturday 12 July 2025 15:42:13 +0000 (0:00:01.395) 0:02:56.106 ********* 2025-07-12 15:45:35.585348 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.585357 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.585366 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.585375 | orchestrator | 2025-07-12 15:45:35.585384 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-07-12 15:45:35.585393 | orchestrator | Saturday 12 July 2025 15:42:14 +0000 (0:00:01.196) 0:02:57.303 ********* 2025-07-12 15:45:35.585404 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.585414 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.585424 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.585434 | orchestrator | 2025-07-12 15:45:35.585444 | orchestrator | TASK [include_role : manila] *************************************************** 2025-07-12 15:45:35.585455 | orchestrator | Saturday 12 July 2025 15:42:16 +0000 (0:00:01.991) 0:02:59.294 ********* 2025-07-12 15:45:35.585462 | orchestrator | 
included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.585468 | orchestrator | 2025-07-12 15:45:35.585474 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-07-12 15:45:35.585480 | orchestrator | Saturday 12 July 2025 15:42:17 +0000 (0:00:01.061) 0:03:00.355 ********* 2025-07-12 15:45:35.585491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 15:45:35.585503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 15:45:35.585517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 15:45:35.585611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585638 | orchestrator | 2025-07-12 15:45:35.585645 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-07-12 15:45:35.585651 | orchestrator | Saturday 12 July 2025 15:42:20 +0000 (0:00:03.521) 0:03:03.876 ********* 2025-07-12 15:45:35.585658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 15:45:35.585668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585687 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.585822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 15:45:35.585838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585863 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.585869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 15:45:35.585879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.585906 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.585913 | orchestrator | 2025-07-12 15:45:35.585919 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-07-12 15:45:35.585925 | orchestrator | Saturday 12 July 2025 15:42:21 +0000 (0:00:00.707) 0:03:04.584 ********* 2025-07-12 15:45:35.585932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 15:45:35.585939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 15:45:35.585945 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.585951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 15:45:35.585974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 15:45:35.585981 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.585987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 15:45:35.585994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 15:45:35.586000 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586006 | orchestrator | 2025-07-12 15:45:35.586012 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-07-12 15:45:35.586052 | orchestrator | Saturday 12 July 2025 15:42:22 +0000 (0:00:01.075) 0:03:05.659 ********* 2025-07-12 15:45:35.586059 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.586065 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.586071 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.586077 | orchestrator | 2025-07-12 15:45:35.586083 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-07-12 15:45:35.586089 | orchestrator | Saturday 12 July 2025 15:42:24 +0000 (0:00:01.633) 0:03:07.293 ********* 2025-07-12 15:45:35.586096 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.586102 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.586109 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.586115 | orchestrator | 2025-07-12 15:45:35.586121 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-07-12 15:45:35.586127 | orchestrator | Saturday 12 July 2025 15:42:26 +0000 (0:00:02.096) 0:03:09.389 ********* 2025-07-12 15:45:35.586133 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.586139 | orchestrator | 2025-07-12 15:45:35.586145 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-07-12 15:45:35.586151 | orchestrator | Saturday 12 July 2025 15:42:27 +0000 (0:00:01.086) 0:03:10.475 ********* 2025-07-12 15:45:35.586158 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 15:45:35.586164 | orchestrator | 2025-07-12 
15:45:35.586170 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-07-12 15:45:35.586176 | orchestrator | Saturday 12 July 2025 15:42:30 +0000 (0:00:03.253) 0:03:13.728 ********* 2025-07-12 15:45:35.586197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:45:35.586206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 15:45:35.586213 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.586224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:45:35.586235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 15:45:35.586242 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.586252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:45:35.586259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 
15:45:35.586265 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586272 | orchestrator | 2025-07-12 15:45:35.586278 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-07-12 15:45:35.586284 | orchestrator | Saturday 12 July 2025 15:42:33 +0000 (0:00:02.481) 0:03:16.210 ********* 2025-07-12 15:45:35.586298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:45:35.586310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 15:45:35.586316 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.586323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:45:35.586340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 15:45:35.586347 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.586357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:45:35.586364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 15:45:35.586370 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586376 | orchestrator | 2025-07-12 15:45:35.586383 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-07-12 15:45:35.586389 | orchestrator | Saturday 12 July 2025 15:42:35 +0000 (0:00:02.130) 0:03:18.341 ********* 2025-07-12 15:45:35.586395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 15:45:35.586408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 15:45:35.586415 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.586422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 15:45:35.586433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 15:45:35.586440 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.586447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 15:45:35.586454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 15:45:35.586461 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586468 | orchestrator | 2025-07-12 15:45:35.586476 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-07-12 15:45:35.586482 | orchestrator | Saturday 12 July 2025 15:42:37 +0000 (0:00:02.472) 0:03:20.813 ********* 2025-07-12 15:45:35.586489 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.586496 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.586503 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.586510 | orchestrator | 2025-07-12 15:45:35.586517 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-07-12 15:45:35.586528 | orchestrator | Saturday 12 July 2025 15:42:39 +0000 (0:00:02.035) 0:03:22.849 ********* 2025-07-12 15:45:35.586535 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.586542 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.586549 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586556 | orchestrator | 2025-07-12 15:45:35.586562 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-07-12 15:45:35.586570 | orchestrator | Saturday 12 July 2025 15:42:41 +0000 (0:00:01.420) 0:03:24.269 ********* 2025-07-12 15:45:35.586577 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.586584 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.586591 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586598 | orchestrator | 2025-07-12 
15:45:35.586605 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-07-12 15:45:35.586612 | orchestrator | Saturday 12 July 2025 15:42:41 +0000 (0:00:00.317) 0:03:24.587 ********* 2025-07-12 15:45:35.586619 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.586625 | orchestrator | 2025-07-12 15:45:35.586632 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-07-12 15:45:35.586639 | orchestrator | Saturday 12 July 2025 15:42:42 +0000 (0:00:01.174) 0:03:25.761 ********* 2025-07-12 15:45:35.586651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 15:45:35.586662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': 
{'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 15:45:35.586670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 15:45:35.586678 | orchestrator | 2025-07-12 15:45:35.586684 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-07-12 15:45:35.586691 | orchestrator | Saturday 12 July 2025 15:42:44 +0000 (0:00:01.782) 0:03:27.544 ********* 2025-07-12 15:45:35.586702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 15:45:35.586710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 15:45:35.586717 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.586724 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.586735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 15:45:35.586743 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586750 | orchestrator | 2025-07-12 
15:45:35.586757 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-07-12 15:45:35.586764 | orchestrator | Saturday 12 July 2025 15:42:44 +0000 (0:00:00.417) 0:03:27.961 ********* 2025-07-12 15:45:35.586771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-12 15:45:35.586779 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.586789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-12 15:45:35.586795 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.586802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-12 15:45:35.586808 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586814 | orchestrator | 2025-07-12 15:45:35.586820 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-07-12 15:45:35.586830 | orchestrator | Saturday 12 July 2025 15:42:45 +0000 (0:00:00.579) 0:03:28.541 ********* 2025-07-12 15:45:35.586836 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.586842 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.586848 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586854 | orchestrator | 2025-07-12 15:45:35.586860 | orchestrator | TASK 
[proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-07-12 15:45:35.586866 | orchestrator | Saturday 12 July 2025 15:42:46 +0000 (0:00:00.732) 0:03:29.273 ********* 2025-07-12 15:45:35.586872 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.586878 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.586884 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586890 | orchestrator | 2025-07-12 15:45:35.586896 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-07-12 15:45:35.586902 | orchestrator | Saturday 12 July 2025 15:42:47 +0000 (0:00:01.267) 0:03:30.540 ********* 2025-07-12 15:45:35.586908 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.586915 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.586921 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.586927 | orchestrator | 2025-07-12 15:45:35.586933 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-07-12 15:45:35.586939 | orchestrator | Saturday 12 July 2025 15:42:47 +0000 (0:00:00.311) 0:03:30.852 ********* 2025-07-12 15:45:35.586945 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.586951 | orchestrator | 2025-07-12 15:45:35.586994 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-07-12 15:45:35.587002 | orchestrator | Saturday 12 July 2025 15:42:49 +0000 (0:00:01.356) 0:03:32.209 ********* 2025-07-12 15:45:35.587008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:45:35.587020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:45:35.587042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 15:45:35.587076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 15:45:35.587118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.587159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.587212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.587245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.587259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.587296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.587303 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:45:35.587316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 15:45:35.587352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.587499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-07-12 15:45:35.587519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.587577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.587586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587592 | orchestrator | 2025-07-12 15:45:35.587597 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-07-12 15:45:35.587603 | orchestrator | Saturday 12 July 2025 15:42:53 +0000 (0:00:04.377) 0:03:36.586 ********* 2025-07-12 15:45:35.587609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:45:35.587614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 15:45:35.587683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:45:35.587695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587764 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.587776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 15:45:35.587837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.587937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.587944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.587949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-07-12 15:45:35.587974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.588025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588039 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.588049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 15:45:35.588055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.588061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:45:35.588071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.588131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.588147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-07-12 15:45:35.588205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 15:45:35.588210 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.588216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.588261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 15:45:35.588279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.588308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 15:45:35.588323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-07-12 15:45:35.588329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 15:45:35.588344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:45:35.588374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.588381 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.588387 | orchestrator | 2025-07-12 15:45:35.588392 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-07-12 15:45:35.588398 | orchestrator | Saturday 12 July 2025 15:42:55 +0000 (0:00:01.534) 0:03:38.121 ********* 2025-07-12 15:45:35.588403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 15:45:35.588413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 15:45:35.588419 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.588424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 15:45:35.588430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 15:45:35.588435 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.588440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 15:45:35.588449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 15:45:35.588455 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.588460 | orchestrator | 2025-07-12 15:45:35.588465 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-07-12 15:45:35.588471 | orchestrator | Saturday 12 July 2025 15:42:57 +0000 (0:00:02.253) 0:03:40.374 ********* 2025-07-12 15:45:35.588476 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.588481 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.588487 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.588492 | orchestrator | 2025-07-12 15:45:35.588497 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-07-12 15:45:35.588503 | orchestrator | Saturday 12 July 2025 15:42:58 +0000 (0:00:01.278) 0:03:41.653 
********* 2025-07-12 15:45:35.588508 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.588513 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.588518 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.588524 | orchestrator | 2025-07-12 15:45:35.588529 | orchestrator | TASK [include_role : placement] ************************************************ 2025-07-12 15:45:35.588535 | orchestrator | Saturday 12 July 2025 15:43:00 +0000 (0:00:02.101) 0:03:43.754 ********* 2025-07-12 15:45:35.588543 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.588551 | orchestrator | 2025-07-12 15:45:35.588559 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-07-12 15:45:35.588568 | orchestrator | Saturday 12 July 2025 15:43:01 +0000 (0:00:01.201) 0:03:44.955 ********* 2025-07-12 15:45:35.588578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.588622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.588640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.588656 | orchestrator | 2025-07-12 15:45:35.588665 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-07-12 15:45:35.588675 | orchestrator | 
Saturday 12 July 2025 15:43:05 +0000 (0:00:03.405) 0:03:48.361 ********* 2025-07-12 15:45:35.588683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.588689 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.588695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.588700 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.588724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.588731 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.588737 | orchestrator | 2025-07-12 15:45:35.588789 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-07-12 15:45:35.588795 | orchestrator | Saturday 12 July 2025 15:43:05 +0000 (0:00:00.493) 0:03:48.855 ********* 2025-07-12 15:45:35.588810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 15:45:35.588817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 15:45:35.588822 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 15:45:35.588828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 15:45:35.588834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 15:45:35.588839 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.588845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 15:45:35.588850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 15:45:35.588856 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.588861 | orchestrator | 2025-07-12 15:45:35.588866 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-07-12 15:45:35.588872 | orchestrator | Saturday 12 July 2025 15:43:06 +0000 (0:00:00.754) 0:03:49.609 ********* 2025-07-12 15:45:35.588877 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.588883 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.588888 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.588894 | orchestrator | 2025-07-12 15:45:35.588899 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-07-12 15:45:35.588904 | orchestrator | Saturday 12 July 2025 15:43:08 +0000 (0:00:01.656) 0:03:51.266 ********* 2025-07-12 15:45:35.588910 | 
orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.588915 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.588920 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.588926 | orchestrator | 2025-07-12 15:45:35.588932 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-07-12 15:45:35.588937 | orchestrator | Saturday 12 July 2025 15:43:10 +0000 (0:00:02.193) 0:03:53.459 ********* 2025-07-12 15:45:35.588942 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.588948 | orchestrator | 2025-07-12 15:45:35.588953 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-07-12 15:45:35.589000 | orchestrator | Saturday 12 July 2025 15:43:11 +0000 (0:00:01.293) 0:03:54.753 ********* 2025-07-12 15:45:35.589025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.589038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.589059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2025-07-12 15:45:35.589093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.589101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589107 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589113 | orchestrator | 2025-07-12 15:45:35.589118 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-07-12 15:45:35.589124 | orchestrator | Saturday 12 July 2025 15:43:16 +0000 (0:00:04.812) 0:03:59.565 ********* 2025-07-12 15:45:35.589130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.589156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589171 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.589177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.589183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589198 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.589219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.589229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.589241 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.589246 | orchestrator | 2025-07-12 15:45:35.589252 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-07-12 15:45:35.589257 | orchestrator | Saturday 12 July 2025 15:43:17 +0000 (0:00:00.984) 0:04:00.549 ********* 2025-07-12 15:45:35.589263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589286 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.589291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589311 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.589329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 15:45:35.589349 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.589354 | orchestrator | 2025-07-12 15:45:35.589359 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-07-12 15:45:35.589364 | orchestrator | Saturday 12 July 2025 15:43:18 +0000 (0:00:00.907) 0:04:01.457 ********* 2025-07-12 15:45:35.589368 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.589375 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.589380 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.589385 | orchestrator | 2025-07-12 15:45:35.589390 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-07-12 15:45:35.589394 | orchestrator | Saturday 12 July 2025 15:43:20 +0000 (0:00:01.671) 0:04:03.128 ********* 2025-07-12 15:45:35.589399 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.589404 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.589409 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.589414 | orchestrator | 2025-07-12 15:45:35.589418 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-07-12 15:45:35.589423 | orchestrator | Saturday 12 July 2025 15:43:22 +0000 (0:00:02.175) 0:04:05.303 ********* 2025-07-12 15:45:35.589428 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.589432 | orchestrator | 2025-07-12 15:45:35.589437 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-07-12 15:45:35.589442 | 
orchestrator | Saturday 12 July 2025 15:43:23 +0000 (0:00:01.556) 0:04:06.860 ********* 2025-07-12 15:45:35.589447 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-07-12 15:45:35.589452 | orchestrator | 2025-07-12 15:45:35.589456 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-07-12 15:45:35.589461 | orchestrator | Saturday 12 July 2025 15:43:25 +0000 (0:00:01.176) 0:04:08.036 ********* 2025-07-12 15:45:35.589470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-12 15:45:35.589475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-12 15:45:35.589480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-12 15:45:35.589485 | orchestrator | 2025-07-12 15:45:35.589490 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-07-12 15:45:35.589495 | orchestrator | Saturday 12 July 2025 15:43:29 +0000 (0:00:04.466) 0:04:12.503 ********* 2025-07-12 15:45:35.589513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 15:45:35.589519 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.589524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 15:45:35.589529 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.589536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 15:45:35.589541 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.589546 | orchestrator | 2025-07-12 15:45:35.589551 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-07-12 15:45:35.589556 | orchestrator | Saturday 12 July 2025 15:43:30 +0000 (0:00:01.413) 0:04:13.917 ********* 2025-07-12 15:45:35.589560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 15:45:35.589568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 15:45:35.589574 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.589579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 15:45:35.589584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 15:45:35.589589 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.589594 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 15:45:35.589599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 15:45:35.589603 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.589608 | orchestrator | 2025-07-12 15:45:35.589613 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-12 15:45:35.589617 | orchestrator | Saturday 12 July 2025 15:43:33 +0000 (0:00:02.186) 0:04:16.103 ********* 2025-07-12 15:45:35.589622 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.589627 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.589632 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.589636 | orchestrator | 2025-07-12 15:45:35.589641 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-12 15:45:35.589646 | orchestrator | Saturday 12 July 2025 15:43:35 +0000 (0:00:02.340) 0:04:18.444 ********* 2025-07-12 15:45:35.589650 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.589655 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.589660 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.589665 | orchestrator | 2025-07-12 15:45:35.589669 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-07-12 15:45:35.589674 | orchestrator | Saturday 12 July 2025 15:43:38 +0000 (0:00:03.088) 0:04:21.533 ********* 2025-07-12 15:45:35.589679 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-spicehtml5proxy) 2025-07-12 15:45:35.589684 | orchestrator | 2025-07-12 15:45:35.589688 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-07-12 15:45:35.589706 | orchestrator | Saturday 12 July 2025 15:43:39 +0000 (0:00:00.804) 0:04:22.337 ********* 2025-07-12 15:45:35.589711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 15:45:35.589717 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.589724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 15:45:35.589732 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.589737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 15:45:35.589742 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.589747 | orchestrator | 2025-07-12 15:45:35.589752 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-07-12 15:45:35.589756 | orchestrator | Saturday 12 July 2025 15:43:40 +0000 (0:00:01.382) 0:04:23.720 ********* 2025-07-12 15:45:35.589761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 15:45:35.589766 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.589771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 15:45:35.589776 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.589781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 15:45:35.589786 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.589790 | orchestrator | 2025-07-12 15:45:35.589795 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-07-12 15:45:35.589800 | orchestrator | Saturday 12 July 2025 15:43:42 +0000 (0:00:01.729) 0:04:25.450 ********* 2025-07-12 15:45:35.589804 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.589809 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.589814 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.589819 | orchestrator | 2025-07-12 15:45:35.589823 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-12 15:45:35.589840 | orchestrator | Saturday 12 July 2025 15:43:43 +0000 (0:00:01.240) 0:04:26.690 ********* 2025-07-12 15:45:35.589846 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:45:35.589854 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:45:35.589859 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:45:35.589864 | orchestrator | 2025-07-12 15:45:35.589868 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-12 15:45:35.589873 | orchestrator | Saturday 12 July 2025 15:43:46 +0000 (0:00:02.349) 0:04:29.040 ********* 2025-07-12 15:45:35.589878 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:45:35.589883 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:45:35.589887 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:45:35.589892 | 
orchestrator | 2025-07-12 15:45:35.589897 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-07-12 15:45:35.589901 | orchestrator | Saturday 12 July 2025 15:43:49 +0000 (0:00:02.990) 0:04:32.031 ********* 2025-07-12 15:45:35.589906 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-07-12 15:45:35.589911 | orchestrator | 2025-07-12 15:45:35.589916 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-07-12 15:45:35.589921 | orchestrator | Saturday 12 July 2025 15:43:50 +0000 (0:00:01.064) 0:04:33.095 ********* 2025-07-12 15:45:35.589928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 15:45:35.589933 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.589938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 15:45:35.589943 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 15:45:35.589948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 15:45:35.589953 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.589973 | orchestrator | 2025-07-12 15:45:35.589978 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-07-12 15:45:35.589983 | orchestrator | Saturday 12 July 2025 15:43:51 +0000 (0:00:00.991) 0:04:34.086 ********* 2025-07-12 15:45:35.590050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 15:45:35.590056 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.590061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 15:45:35.590070 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.590095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 15:45:35.590123 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.590129 | orchestrator | 2025-07-12 15:45:35.590134 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-07-12 15:45:35.590139 | orchestrator | Saturday 12 July 2025 15:43:52 +0000 (0:00:01.262) 0:04:35.348 ********* 2025-07-12 15:45:35.590144 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.590148 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.590153 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.590158 | orchestrator | 2025-07-12 15:45:35.590162 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-12 15:45:35.590167 | orchestrator | Saturday 12 July 2025 15:43:54 +0000 (0:00:01.779) 0:04:37.128 ********* 2025-07-12 15:45:35.590172 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:45:35.590177 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:45:35.590181 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:45:35.590186 | orchestrator | 2025-07-12 15:45:35.590191 | orchestrator | TASK [proxysql-config : Copying over 
nova-cell ProxySQL rules config] ********** 2025-07-12 15:45:35.590199 | orchestrator | Saturday 12 July 2025 15:43:56 +0000 (0:00:02.434) 0:04:39.562 ********* 2025-07-12 15:45:35.590204 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:45:35.590208 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:45:35.590213 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:45:35.590218 | orchestrator | 2025-07-12 15:45:35.590222 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-07-12 15:45:35.590227 | orchestrator | Saturday 12 July 2025 15:43:59 +0000 (0:00:03.054) 0:04:42.616 ********* 2025-07-12 15:45:35.590232 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.590236 | orchestrator | 2025-07-12 15:45:35.590241 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-07-12 15:45:35.590246 | orchestrator | Saturday 12 July 2025 15:44:00 +0000 (0:00:01.320) 0:04:43.937 ********* 2025-07-12 15:45:35.590251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.590262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 15:45:35.590267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.590290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.590304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 15:45:35.590318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 15:45:35.590323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.590342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.590375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.590380 | orchestrator | 2025-07-12 15:45:35.590385 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-07-12 15:45:35.590390 | orchestrator | Saturday 12 July 2025 15:44:04 +0000 (0:00:03.658) 0:04:47.595 ********* 2025-07-12 15:45:35.590410 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.590416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 15:45:35.590423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.590442 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.590448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.590467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.590476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 15:45:35.590481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 15:45:35.590486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590500 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 15:45:35.590526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.590531 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 15:45:35.590538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:45:35.590543 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.590548 | orchestrator | 2025-07-12 15:45:35.590553 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-07-12 15:45:35.590561 | orchestrator | Saturday 12 July 2025 15:44:05 +0000 (0:00:00.716) 0:04:48.311 ********* 2025-07-12 15:45:35.590566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 15:45:35.590572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 15:45:35.590576 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.590581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 15:45:35.590586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 15:45:35.590591 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.590596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 15:45:35.590600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 15:45:35.590605 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.590610 | orchestrator | 2025-07-12 15:45:35.590615 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-07-12 15:45:35.590619 | orchestrator | Saturday 12 July 2025 15:44:06 +0000 (0:00:00.872) 0:04:49.183 ********* 2025-07-12 15:45:35.590624 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.590629 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.590634 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.590638 | orchestrator | 2025-07-12 15:45:35.590643 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-07-12 15:45:35.590648 | orchestrator | Saturday 12 July 2025 15:44:07 +0000 (0:00:01.728) 0:04:50.912 ********* 2025-07-12 15:45:35.590653 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:45:35.590657 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:45:35.590662 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:45:35.590667 | orchestrator | 2025-07-12 15:45:35.590671 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-07-12 15:45:35.590676 | orchestrator | Saturday 12 July 2025 15:44:10 +0000 
(0:00:02.076) 0:04:52.988 ********* 2025-07-12 15:45:35.590681 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.590685 | orchestrator | 2025-07-12 15:45:35.590690 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-07-12 15:45:35.590695 | orchestrator | Saturday 12 July 2025 15:44:11 +0000 (0:00:01.303) 0:04:54.291 ********* 2025-07-12 15:45:35.590714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:45:35.590726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:45:35.590731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:45:35.590737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:45:35.590757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:45:35.590770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:45:35.590775 | orchestrator | 2025-07-12 15:45:35.590780 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-07-12 15:45:35.590785 | orchestrator | Saturday 12 July 2025 15:44:16 +0000 (0:00:05.446) 0:04:59.738 ********* 2025-07-12 15:45:35.590790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 15:45:35.590795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 15:45:35.590800 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.590820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 15:45:35.590833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 15:45:35.590839 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.590844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 15:45:35.590849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 15:45:35.590854 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.590859 | orchestrator | 2025-07-12 15:45:35.590864 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-07-12 15:45:35.590869 | orchestrator | Saturday 12 July 2025 15:44:17 +0000 (0:00:00.779) 0:05:00.518 ********* 2025-07-12 15:45:35.590874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 15:45:35.590892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 15:45:35.590901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}})  2025-07-12 15:45:35.590906 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.590911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 15:45:35.590916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 15:45:35.590923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 15:45:35.590928 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.590933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 15:45:35.590938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 15:45:35.590943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 15:45:35.590948 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.590952 | orchestrator | 2025-07-12 15:45:35.590970 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL 
users config] *********
2025-07-12 15:45:35.590975 | orchestrator | Saturday 12 July 2025 15:44:18 +0000 (0:00:00.774)       0:05:01.292 *********
2025-07-12 15:45:35.590980 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.590985 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.590990 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.590994 | orchestrator |
2025-07-12 15:45:35.590999 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-07-12 15:45:35.591004 | orchestrator | Saturday 12 July 2025 15:44:18 +0000 (0:00:00.449)       0:05:01.741 *********
2025-07-12 15:45:35.591009 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.591013 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.591018 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.591023 | orchestrator |
2025-07-12 15:45:35.591027 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-07-12 15:45:35.591032 | orchestrator | Saturday 12 July 2025 15:44:19 +0000 (0:00:01.155)       0:05:02.897 *********
2025-07-12 15:45:35.591037 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:45:35.591042 | orchestrator |
2025-07-12 15:45:35.591046 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-07-12 15:45:35.591051 | orchestrator | Saturday 12 July 2025 15:44:21 +0000 (0:00:01.489)       0:05:04.387 *********
2025-07-12 15:45:35.591056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 15:45:35.591065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:45:35.591084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 15:45:35.591103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:45:35.591113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 15:45:35.591155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:45:35.591160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 15:45:35.591189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 15:45:35.591194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 15:45:35.591200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 15:45:35.591213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 15:45:35.591253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 15:45:35.591261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591279 | orchestrator |
2025-07-12 15:45:35.591284 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-07-12 15:45:35.591289 | orchestrator | Saturday 12 July 2025 15:44:25 +0000 (0:00:04.140)       0:05:08.527 *********
2025-07-12 15:45:35.591294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 15:45:35.591307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:45:35.591312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 15:45:35.591340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 15:45:35.591350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 15:45:35.591368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:45:35.591373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591378 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.591383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 15:45:35.591440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 15:45:35.591452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591474 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.591479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 15:45:35.591484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:45:35.591489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:45:35.591510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 15:45:35.591519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 15:45:35.591524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:45:35.591536
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 15:45:35.591541 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.591550 | orchestrator | 2025-07-12 15:45:35.591554 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-07-12 15:45:35.591559 | orchestrator | Saturday 12 July 2025 15:44:26 +0000 (0:00:01.262) 0:05:09.790 ********* 2025-07-12 15:45:35.591564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 15:45:35.591569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 15:45:35.591577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 15:45:35.591585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 15:45:35.591598 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.591606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 15:45:35.591613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 15:45:35.591621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 15:45:35.591629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 15:45:35.591638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 15:45:35.591647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 15:45:35.591655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 15:45:35.591663 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.591670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 15:45:35.591675 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.591680 | orchestrator | 2025-07-12 15:45:35.591685 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-07-12 15:45:35.591689 | orchestrator | Saturday 12 July 2025 15:44:27 +0000 (0:00:00.934) 0:05:10.724 ********* 2025-07-12 15:45:35.591694 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.591699 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.591703 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.591708 | orchestrator | 2025-07-12 15:45:35.591713 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-07-12 15:45:35.591718 | orchestrator | Saturday 12 July 2025 15:44:28 +0000 (0:00:00.431) 0:05:11.155 ********* 2025-07-12 15:45:35.591725 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.591730 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.591735 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.591740 | orchestrator | 2025-07-12 15:45:35.591744 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-07-12 15:45:35.591749 | orchestrator | Saturday 12 July 2025 15:44:29 +0000 (0:00:01.392) 0:05:12.548 ********* 2025-07-12 15:45:35.591754 | orchestrator | included: rabbitmq for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.591763 | orchestrator | 2025-07-12 15:45:35.591768 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-07-12 15:45:35.591773 | orchestrator | Saturday 12 July 2025 15:44:31 +0000 (0:00:01.689) 0:05:14.237 ********* 2025-07-12 15:45:35.591781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:45:35.591786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:45:35.591792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 15:45:35.591797 | orchestrator | 2025-07-12 15:45:35.591802 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-07-12 15:45:35.591807 | orchestrator | Saturday 12 July 2025 15:44:33 +0000 (0:00:02.640) 0:05:16.877 ********* 2025-07-12 15:45:35.591814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 15:45:35.591831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 15:45:35.591837 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.591842 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.591847 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 15:45:35.591852 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.591857 | orchestrator | 2025-07-12 15:45:35.591861 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-07-12 15:45:35.591866 | orchestrator | Saturday 12 July 2025 15:44:34 +0000 (0:00:00.374) 0:05:17.251 ********* 2025-07-12 15:45:35.591871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 15:45:35.591876 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.591881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 15:45:35.591885 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.591890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 15:45:35.591895 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.591900 | orchestrator | 2025-07-12 15:45:35.591905 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-07-12 15:45:35.591909 | orchestrator | Saturday 12 July 2025 15:44:35 +0000 (0:00:01.006) 0:05:18.258 ********* 2025-07-12 15:45:35.591914 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.591923 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.591927 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.591932 | orchestrator | 2025-07-12 15:45:35.591937 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-07-12 15:45:35.591941 | orchestrator | Saturday 12 July 2025 15:44:35 +0000 (0:00:00.442) 0:05:18.701 ********* 2025-07-12 15:45:35.591946 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.591951 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:45:35.591956 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:45:35.591974 | orchestrator | 2025-07-12 15:45:35.591979 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-07-12 15:45:35.591986 | orchestrator | Saturday 12 July 2025 15:44:37 +0000 (0:00:01.359) 0:05:20.061 ********* 2025-07-12 15:45:35.591991 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:45:35.591996 | orchestrator | 2025-07-12 15:45:35.592000 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-07-12 15:45:35.592005 | orchestrator | Saturday 12 July 2025 15:44:38 +0000 (0:00:01.722) 0:05:21.783 ********* 2025-07-12 15:45:35.592013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.592018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.592024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.592032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.592041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.592050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 15:45:35.592055 | orchestrator | 2025-07-12 15:45:35.592060 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-07-12 15:45:35.592065 | orchestrator | Saturday 12 July 2025 15:44:45 +0000 (0:00:06.460) 
0:05:28.244 ********* 2025-07-12 15:45:35.592070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.592078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.592083 | 
orchestrator | skipping: [testbed-node-0] 2025-07-12 15:45:35.592091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-12 15:45:35.592098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  
2025-07-12 15:45:35.592104 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.592117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-12 15:45:35.592123 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592127 | orchestrator |
2025-07-12 15:45:35.592132 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-07-12 15:45:35.592137 | orchestrator | Saturday 12 July 2025 15:44:45 +0000 (0:00:00.615) 0:05:28.860 *********
2025-07-12 15:45:35.592142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592164 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592196 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 15:45:35.592218 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592223 | orchestrator |
2025-07-12 15:45:35.592228 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-07-12 15:45:35.592233 | orchestrator | Saturday 12 July 2025 15:44:47 +0000 (0:00:01.602) 0:05:30.462 *********
2025-07-12 15:45:35.592237 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:45:35.592242 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:45:35.592247 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:45:35.592252 | orchestrator |
2025-07-12 15:45:35.592257 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-07-12 15:45:35.592261 | orchestrator | Saturday 12 July 2025 15:44:48 +0000 (0:00:01.340) 0:05:31.803 *********
2025-07-12 15:45:35.592266 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:45:35.592271 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:45:35.592276 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:45:35.592281 | orchestrator |
2025-07-12 15:45:35.592285 | orchestrator | TASK [include_role : swift] ****************************************************
2025-07-12 15:45:35.592290 | orchestrator | Saturday 12 July 2025 15:44:51 +0000 (0:00:02.178) 0:05:33.982 *********
2025-07-12 15:45:35.592295 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592300 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592305 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592309 | orchestrator |
2025-07-12 15:45:35.592314 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-07-12 15:45:35.592319 | orchestrator | Saturday 12 July 2025 15:44:51 +0000 (0:00:00.309) 0:05:34.291 *********
2025-07-12 15:45:35.592324 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592328 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592333 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592338 | orchestrator |
2025-07-12 15:45:35.592343 | orchestrator | TASK [include_role : trove] ****************************************************
2025-07-12 15:45:35.592347 | orchestrator | Saturday 12 July 2025 15:44:51 +0000 (0:00:00.598) 0:05:34.890 *********
2025-07-12 15:45:35.592352 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592357 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592362 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592366 | orchestrator |
2025-07-12 15:45:35.592371 | orchestrator | TASK [include_role : venus] ****************************************************
2025-07-12 15:45:35.592376 | orchestrator | Saturday 12 July 2025 15:44:52 +0000 (0:00:00.311) 0:05:35.202 *********
2025-07-12 15:45:35.592383 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592388 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592393 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592398 | orchestrator |
2025-07-12 15:45:35.592403 | orchestrator | TASK [include_role : watcher] **************************************************
2025-07-12 15:45:35.592408 | orchestrator | Saturday 12 July 2025 15:44:52 +0000 (0:00:00.299) 0:05:35.501 *********
2025-07-12 15:45:35.592412 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592417 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592422 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592426 | orchestrator |
2025-07-12 15:45:35.592431 | orchestrator | TASK [include_role : zun] ******************************************************
2025-07-12 15:45:35.592436 | orchestrator | Saturday 12 July 2025 15:44:52 +0000 (0:00:00.320) 0:05:35.821 *********
2025-07-12 15:45:35.592440 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592445 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592450 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592455 | orchestrator |
2025-07-12 15:45:35.592459 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-07-12 15:45:35.592467 | orchestrator | Saturday 12 July 2025 15:44:53 +0000 (0:00:00.869) 0:05:36.690 *********
2025-07-12 15:45:35.592472 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.592477 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.592481 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.592486 | orchestrator |
2025-07-12 15:45:35.592491 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-07-12 15:45:35.592498 | orchestrator | Saturday 12 July 2025 15:44:54 +0000 (0:00:00.666) 0:05:37.357 *********
2025-07-12 15:45:35.592503 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.592507 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.592512 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.592517 | orchestrator |
2025-07-12 15:45:35.592522 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-07-12 15:45:35.592526 | orchestrator | Saturday 12 July 2025 15:44:54 +0000 (0:00:00.352) 0:05:37.710 *********
2025-07-12 15:45:35.592531 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.592536 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.592540 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.592545 | orchestrator |
2025-07-12 15:45:35.592550 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-07-12 15:45:35.592555 | orchestrator | Saturday 12 July 2025 15:44:56 +0000 (0:00:01.293) 0:05:39.003 *********
2025-07-12 15:45:35.592559 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.592564 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.592569 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.592573 | orchestrator |
2025-07-12 15:45:35.592578 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-07-12 15:45:35.592583 | orchestrator | Saturday 12 July 2025 15:44:56 +0000 (0:00:00.897) 0:05:39.906 *********
2025-07-12 15:45:35.592588 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.592592 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.592597 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.592602 | orchestrator |
2025-07-12 15:45:35.592606 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-07-12 15:45:35.592611 | orchestrator | Saturday 12 July 2025 15:44:57 +0000 (0:00:00.897) 0:05:40.803 *********
2025-07-12 15:45:35.592616 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:45:35.592621 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:45:35.592625 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:45:35.592630 | orchestrator |
2025-07-12 15:45:35.592635 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-07-12 15:45:35.592640 | orchestrator | Saturday 12 July 2025 15:45:07 +0000 (0:00:09.526) 0:05:50.329 *********
2025-07-12 15:45:35.592645 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.592649 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.592654 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.592658 | orchestrator |
2025-07-12 15:45:35.592663 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-07-12 15:45:35.592668 | orchestrator | Saturday 12 July 2025 15:45:08 +0000 (0:00:00.678) 0:05:51.008 *********
2025-07-12 15:45:35.592673 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:45:35.592677 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:45:35.592682 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:45:35.592687 | orchestrator |
2025-07-12 15:45:35.592692 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-07-12 15:45:35.592696 | orchestrator | Saturday 12 July 2025 15:45:16 +0000 (0:00:08.650) 0:05:59.659 *********
2025-07-12 15:45:35.592701 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.592706 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.592710 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.592715 | orchestrator |
2025-07-12 15:45:35.592720 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-07-12 15:45:35.592725 | orchestrator | Saturday 12 July 2025 15:45:20 +0000 (0:00:03.783) 0:06:03.442 *********
2025-07-12 15:45:35.592733 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:45:35.592737 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:45:35.592742 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:45:35.592747 | orchestrator |
2025-07-12 15:45:35.592752 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-07-12 15:45:35.592756 | orchestrator | Saturday 12 July 2025 15:45:24 +0000 (0:00:04.300) 0:06:07.743 *********
2025-07-12 15:45:35.592761 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592766 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592771 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592775 | orchestrator |
2025-07-12 15:45:35.592780 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-07-12 15:45:35.592785 | orchestrator | Saturday 12 July 2025 15:45:25 +0000 (0:00:00.355) 0:06:08.098 *********
2025-07-12 15:45:35.592789 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592794 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592799 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592804 | orchestrator |
2025-07-12 15:45:35.592808 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-07-12 15:45:35.592813 | orchestrator | Saturday 12 July 2025 15:45:25 +0000 (0:00:00.728) 0:06:08.827 *********
2025-07-12 15:45:35.592818 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592823 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592830 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592835 | orchestrator |
2025-07-12 15:45:35.592839 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-07-12 15:45:35.592844 | orchestrator | Saturday 12 July 2025 15:45:26 +0000 (0:00:00.345) 0:06:09.173 *********
2025-07-12 15:45:35.592849 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592854 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592858 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592863 | orchestrator |
2025-07-12 15:45:35.592868 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-07-12 15:45:35.592872 | orchestrator | Saturday 12 July 2025 15:45:26 +0000 (0:00:00.381) 0:06:09.555 *********
2025-07-12 15:45:35.592877 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592882 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592887 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592891 | orchestrator |
2025-07-12 15:45:35.592896 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-07-12 15:45:35.592901 | orchestrator | Saturday 12 July 2025 15:45:26 +0000 (0:00:00.338) 0:06:09.893 *********
2025-07-12 15:45:35.592905 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:45:35.592910 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:45:35.592915 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:45:35.592919 | orchestrator |
2025-07-12 15:45:35.592924 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-07-12 15:45:35.592929 | orchestrator | Saturday 12 July 2025 15:45:27 +0000 (0:00:00.768) 0:06:10.662 *********
2025-07-12 15:45:35.592936 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.592941 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.592946 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.592950 | orchestrator |
2025-07-12 15:45:35.592955 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-07-12 15:45:35.592993 | orchestrator | Saturday 12 July 2025 15:45:32 +0000 (0:00:04.773) 0:06:15.436 *********
2025-07-12 15:45:35.592999 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:45:35.593003 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:45:35.593008 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:45:35.593013 | orchestrator |
2025-07-12 15:45:35.593018 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:45:35.593023 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-12 15:45:35.593032 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-12 15:45:35.593037 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-12 15:45:35.593041 | orchestrator |
2025-07-12 15:45:35.593046 | orchestrator |
2025-07-12 15:45:35.593051 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:45:35.593056 | orchestrator | Saturday 12 July 2025 15:45:33 +0000 (0:00:00.823) 0:06:16.259 *********
2025-07-12 15:45:35.593061 | orchestrator | ===============================================================================
2025-07-12 15:45:35.593065 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.53s
2025-07-12 15:45:35.593070 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.65s
2025-07-12 15:45:35.593075 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.54s
2025-07-12 15:45:35.593080 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.46s
2025-07-12 15:45:35.593084 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.45s
2025-07-12 15:45:35.593089 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.81s
2025-07-12 15:45:35.593094 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.78s
2025-07-12 15:45:35.593098 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.77s
2025-07-12 15:45:35.593103 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.48s
2025-07-12 15:45:35.593108 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.47s
2025-07-12 15:45:35.593112 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.38s
2025-07-12 15:45:35.593117 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.36s
2025-07-12 15:45:35.593122 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.30s
2025-07-12 15:45:35.593127 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.24s
2025-07-12 15:45:35.593131 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.23s
2025-07-12 15:45:35.593136 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.20s
2025-07-12 15:45:35.593141 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.14s
2025-07-12 15:45:35.593146 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.03s
2025-07-12 15:45:35.593150 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.78s
2025-07-12 15:45:35.593155 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.70s
2025-07-12 15:45:35.593160 | orchestrator | 2025-07-12 15:45:35 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:45:35.593165 | orchestrator | 2025-07-12 15:45:35 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:45:38.611220 | orchestrator | 2025-07-12 15:45:38 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:45:38.613816 | orchestrator | 2025-07-12 15:45:38 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:45:38.614564 | orchestrator | 2025-07-12 15:45:38 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:45:38.614832 | orchestrator | 2025-07-12 15:45:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:45:41.656286 | orchestrator | 2025-07-12 15:45:41 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:45:41.656376 | orchestrator | 2025-07-12 15:45:41 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:45:41.656412 | orchestrator | 2025-07-12 15:45:41 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:45:41.656423 | orchestrator | 2025-07-12 15:45:41 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:45:44.688641 | orchestrator | 2025-07-12 15:45:44 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:45:44.689534 | orchestrator | 2025-07-12 15:45:44 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:45:44.690642 | orchestrator | 2025-07-12 15:45:44 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:45:44.690675 | orchestrator | 2025-07-12 15:45:44 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:45:47.734871 | orchestrator | 2025-07-12 15:45:47 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:45:47.737383 | orchestrator | 2025-07-12 15:45:47 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:45:47.738168 | orchestrator | 2025-07-12 15:45:47 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:45:47.738194 | orchestrator | 2025-07-12 15:45:47 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:45:50.789044 | orchestrator | 2025-07-12 15:45:50 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:45:50.790199 | orchestrator | 2025-07-12 15:45:50 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:45:50.790981 | orchestrator | 2025-07-12 15:45:50 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:45:50.791008 | orchestrator | 2025-07-12 15:45:50 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:45:53.835002 | orchestrator | 2025-07-12 15:45:53 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:45:53.836003 | orchestrator | 2025-07-12 15:45:53 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:45:53.838135 | orchestrator | 2025-07-12 15:45:53 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:45:53.838392 | orchestrator | 2025-07-12 15:45:53 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:45:56.876228 | orchestrator | 2025-07-12 15:45:56 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:45:56.877800 | orchestrator | 2025-07-12 15:45:56 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:45:56.879637 | orchestrator | 2025-07-12 15:45:56 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:45:56.879828 | orchestrator | 2025-07-12 15:45:56 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:45:59.913469 | orchestrator | 2025-07-12 15:45:59 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:45:59.914596 | orchestrator | 2025-07-12 15:45:59 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:45:59.915319 | orchestrator | 2025-07-12 15:45:59 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:45:59.915436 | orchestrator | 2025-07-12 15:45:59 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:02.953014 | orchestrator | 2025-07-12 15:46:02 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:02.953607 | orchestrator | 2025-07-12 15:46:02 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:02.954913 | orchestrator | 2025-07-12 15:46:02 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:02.955038 | orchestrator | 2025-07-12 15:46:02 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:06.009273 | orchestrator | 2025-07-12 15:46:06 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:06.010714 | orchestrator | 2025-07-12 15:46:06 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:06.012146 | orchestrator | 2025-07-12 15:46:06 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:06.012851 | orchestrator | 2025-07-12 15:46:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:09.066455 | orchestrator | 2025-07-12 15:46:09 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:09.069085 | orchestrator | 2025-07-12 15:46:09 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:09.071039 | orchestrator | 2025-07-12 15:46:09 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:09.071089 | orchestrator | 2025-07-12 15:46:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:12.129508 | orchestrator | 2025-07-12 15:46:12 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:12.131965 | orchestrator | 2025-07-12 15:46:12 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:12.134544 | orchestrator | 2025-07-12 15:46:12 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:12.135061 | orchestrator | 2025-07-12 15:46:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:15.178447 | orchestrator | 2025-07-12 15:46:15 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:15.179447 | orchestrator | 2025-07-12 15:46:15 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:15.182911 | orchestrator | 2025-07-12 15:46:15 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:15.182991 | orchestrator | 2025-07-12 15:46:15 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:18.227913 | orchestrator | 2025-07-12 15:46:18 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:18.228051 | orchestrator | 2025-07-12 15:46:18 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:18.235280 | orchestrator | 2025-07-12 15:46:18 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:18.235331 | orchestrator | 2025-07-12 15:46:18 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:21.275258 | orchestrator | 2025-07-12 15:46:21 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:21.275358 | orchestrator | 2025-07-12 15:46:21 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:21.275373 | orchestrator | 2025-07-12 15:46:21 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:21.275385 | orchestrator | 2025-07-12 15:46:21 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:24.329829 | orchestrator | 2025-07-12 15:46:24 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:24.331631 | orchestrator | 2025-07-12 15:46:24 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:24.333985 | orchestrator | 2025-07-12 15:46:24 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:24.334258 | orchestrator | 2025-07-12 15:46:24 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:27.424831 | orchestrator | 2025-07-12 15:46:27 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:27.426124 | orchestrator | 2025-07-12 15:46:27 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:27.426177 | orchestrator | 2025-07-12 15:46:27 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:27.426191 | orchestrator | 2025-07-12 15:46:27 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:30.478359 | orchestrator | 2025-07-12 15:46:30 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:30.478749 | orchestrator | 2025-07-12 15:46:30 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:30.479566 | orchestrator | 2025-07-12 15:46:30 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:30.479674 | orchestrator | 2025-07-12 15:46:30 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:33.534119 | orchestrator | 2025-07-12 15:46:33 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:33.535121 | orchestrator | 2025-07-12 15:46:33 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:33.536748 | orchestrator | 2025-07-12 15:46:33 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:33.536803 | orchestrator | 2025-07-12 15:46:33 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:36.594768 | orchestrator | 2025-07-12 15:46:36 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:36.597507 | orchestrator | 2025-07-12 15:46:36 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:36.599565 | orchestrator | 2025-07-12 15:46:36 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:36.599855 | orchestrator | 2025-07-12 15:46:36 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:39.644316 | orchestrator | 2025-07-12 15:46:39 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:39.645317 | orchestrator | 2025-07-12 15:46:39 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:39.647170 | orchestrator | 2025-07-12 15:46:39 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:39.647390 | orchestrator | 2025-07-12 15:46:39 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:42.710143 | orchestrator | 2025-07-12 15:46:42 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:42.712601 | orchestrator | 2025-07-12 15:46:42 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:42.712654 | orchestrator | 2025-07-12 15:46:42 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:42.712667 | orchestrator | 2025-07-12 15:46:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:45.756770 | orchestrator | 2025-07-12 15:46:45 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:45.758250 | orchestrator | 2025-07-12 15:46:45 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:45.760093 | orchestrator | 2025-07-12 15:46:45 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:45.760134 | orchestrator | 2025-07-12 15:46:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:48.817604 | orchestrator | 2025-07-12 15:46:48 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:48.819048 | orchestrator | 2025-07-12 15:46:48 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:48.821644 | orchestrator | 2025-07-12 15:46:48 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:48.822094 | orchestrator | 2025-07-12 15:46:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:51.874295 | orchestrator | 2025-07-12 15:46:51 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:51.874782 | orchestrator | 2025-07-12 15:46:51 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:51.876435 | orchestrator | 2025-07-12 15:46:51 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:51.876468 | orchestrator | 2025-07-12 15:46:51 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:54.923321 | orchestrator | 2025-07-12 15:46:54 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:54.925927 | orchestrator | 2025-07-12 15:46:54 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:54.928307 | orchestrator | 2025-07-12 15:46:54 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:54.928690 | orchestrator | 2025-07-12 15:46:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:46:57.980007 | orchestrator | 2025-07-12 15:46:57 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:46:57.980107 | orchestrator | 2025-07-12 15:46:57 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:46:57.980849 | orchestrator | 2025-07-12 15:46:57 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:46:57.981101 | orchestrator | 2025-07-12 15:46:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:01.028334 | orchestrator | 2025-07-12 15:47:01 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:01.029331 | orchestrator | 2025-07-12 15:47:01 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:01.030700 | orchestrator | 2025-07-12 15:47:01 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:47:01.030734 | orchestrator | 2025-07-12 15:47:01 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:04.078589 | orchestrator | 2025-07-12 15:47:04 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:04.079613 | orchestrator | 2025-07-12 15:47:04 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:04.081179 | orchestrator | 2025-07-12 15:47:04 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:47:04.081412 | orchestrator | 2025-07-12 15:47:04 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:07.128596 | orchestrator | 2025-07-12 15:47:07 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:07.129763 | orchestrator | 2025-07-12 15:47:07 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:07.131109 | orchestrator | 2025-07-12 15:47:07 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:47:07.131144 | orchestrator | 2025-07-12 15:47:07 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:10.175498 | orchestrator | 2025-07-12 15:47:10 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:10.176941 | orchestrator | 2025-07-12 15:47:10 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:10.178003 | orchestrator | 2025-07-12 15:47:10 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:47:10.178081 | orchestrator | 2025-07-12 15:47:10 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:13.228306 | orchestrator | 2025-07-12 15:47:13 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:13.230105 | orchestrator | 2025-07-12 15:47:13 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:13.231609 | orchestrator | 2025-07-12 15:47:13 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:47:13.231646 | orchestrator | 2025-07-12 15:47:13 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:16.305794 | orchestrator | 2025-07-12 15:47:16 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:16.306692 | orchestrator | 2025-07-12 15:47:16 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:16.308481 | orchestrator | 2025-07-12 15:47:16 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:47:16.309077 | orchestrator | 2025-07-12 15:47:16 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:19.353224 | orchestrator | 2025-07-12 15:47:19 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:19.354972 | orchestrator | 2025-07-12 15:47:19 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:19.356599 | orchestrator | 2025-07-12 15:47:19 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:47:19.356668 | orchestrator | 2025-07-12 15:47:19 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:22.397365 | orchestrator | 2025-07-12 15:47:22 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:22.401102 | orchestrator | 2025-07-12 15:47:22 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:22.401730 | orchestrator | 2025-07-12 15:47:22 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:47:22.401753 | orchestrator | 2025-07-12 15:47:22 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:25.452704 | orchestrator | 2025-07-12 15:47:25 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:25.454413 | orchestrator | 2025-07-12 15:47:25 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:25.456826 | orchestrator | 2025-07-12 15:47:25 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:47:25.457119 | orchestrator | 2025-07-12 15:47:25 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:28.506287 | orchestrator | 2025-07-12 15:47:28 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:28.507447 | orchestrator | 2025-07-12 15:47:28 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:28.508453 | orchestrator | 2025-07-12 15:47:28 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED
2025-07-12 15:47:28.508909 | orchestrator | 2025-07-12 15:47:28 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:47:31.553197 | orchestrator | 2025-07-12 15:47:31 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED
2025-07-12 15:47:31.554519 | orchestrator | 2025-07-12 15:47:31 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:47:31.556507 | orchestrator | 2025-07-12 15:47:31 |
INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:47:31.556694 | orchestrator | 2025-07-12 15:47:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:47:34.608345 | orchestrator | 2025-07-12 15:47:34 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED 2025-07-12 15:47:34.611187 | orchestrator | 2025-07-12 15:47:34 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED 2025-07-12 15:47:34.613032 | orchestrator | 2025-07-12 15:47:34 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:47:34.613442 | orchestrator | 2025-07-12 15:47:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:47:37.660971 | orchestrator | 2025-07-12 15:47:37 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED 2025-07-12 15:47:37.661877 | orchestrator | 2025-07-12 15:47:37 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED 2025-07-12 15:47:37.666133 | orchestrator | 2025-07-12 15:47:37 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:47:37.666376 | orchestrator | 2025-07-12 15:47:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:47:40.713988 | orchestrator | 2025-07-12 15:47:40 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED 2025-07-12 15:47:40.715948 | orchestrator | 2025-07-12 15:47:40 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED 2025-07-12 15:47:40.717019 | orchestrator | 2025-07-12 15:47:40 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state STARTED 2025-07-12 15:47:40.717303 | orchestrator | 2025-07-12 15:47:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:47:43.757896 | orchestrator | 2025-07-12 15:47:43 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED 2025-07-12 15:47:43.759276 | orchestrator | 2025-07-12 15:47:43 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in 
state STARTED 2025-07-12 15:47:43.772662 | orchestrator | 2025-07-12 15:47:43.772717 | orchestrator | 2025-07-12 15:47:43.772731 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-07-12 15:47:43.772743 | orchestrator | 2025-07-12 15:47:43.772754 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-07-12 15:47:43.772765 | orchestrator | Saturday 12 July 2025 15:36:39 +0000 (0:00:00.745) 0:00:00.745 ********* 2025-07-12 15:47:43.772777 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.772803 | orchestrator | 2025-07-12 15:47:43.772814 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-07-12 15:47:43.772825 | orchestrator | Saturday 12 July 2025 15:36:40 +0000 (0:00:01.223) 0:00:01.968 ********* 2025-07-12 15:47:43.772835 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.772883 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.772904 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.772923 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.772941 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.772957 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.772968 | orchestrator | 2025-07-12 15:47:43.772978 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-07-12 15:47:43.772989 | orchestrator | Saturday 12 July 2025 15:36:41 +0000 (0:00:01.564) 0:00:03.532 ********* 2025-07-12 15:47:43.773000 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.773010 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.773021 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.773055 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.773066 | orchestrator | ok: [testbed-node-4] 
2025-07-12 15:47:43.773077 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.773087 | orchestrator |
2025-07-12 15:47:43.773098 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-07-12 15:47:43.773108 | orchestrator | Saturday 12 July 2025 15:36:42 +0000 (0:00:00.909) 0:00:04.441 *********
2025-07-12 15:47:43.773119 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.773129 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.773140 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.773150 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.773160 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.773171 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.773181 | orchestrator |
2025-07-12 15:47:43.773192 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-07-12 15:47:43.773203 | orchestrator | Saturday 12 July 2025 15:36:43 +0000 (0:00:00.995) 0:00:05.437 *********
2025-07-12 15:47:43.773213 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.773224 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.773234 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.773244 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.773257 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.773269 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.773280 | orchestrator |
2025-07-12 15:47:43.773292 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-12 15:47:43.773305 | orchestrator | Saturday 12 July 2025 15:36:44 +0000 (0:00:00.763) 0:00:06.201 *********
2025-07-12 15:47:43.773317 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.773329 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.773340 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.773351 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.773363 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.773375 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.773386 | orchestrator |
2025-07-12 15:47:43.773399 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-12 15:47:43.773411 | orchestrator | Saturday 12 July 2025 15:36:45 +0000 (0:00:00.608) 0:00:06.810 *********
2025-07-12 15:47:43.773423 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.773434 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.773446 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.773458 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.773470 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.773482 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.773494 | orchestrator |
2025-07-12 15:47:43.773506 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-12 15:47:43.773530 | orchestrator | Saturday 12 July 2025 15:36:46 +0000 (0:00:00.823) 0:00:07.633 *********
2025-07-12 15:47:43.773543 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.773555 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.773567 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.773579 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.773592 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.773604 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.773616 | orchestrator |
2025-07-12 15:47:43.773644 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-12 15:47:43.773661 | orchestrator | Saturday 12 July 2025 15:36:46 +0000 (0:00:00.781) 0:00:08.415 *********
2025-07-12 15:47:43.773681 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.773692 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.773702 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.773713 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.773723 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.773741 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.773751 | orchestrator |
2025-07-12 15:47:43.773762 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-12 15:47:43.773773 | orchestrator | Saturday 12 July 2025 15:36:47 +0000 (0:00:00.972) 0:00:09.388 *********
2025-07-12 15:47:43.773792 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 15:47:43.773804 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 15:47:43.773814 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 15:47:43.773825 | orchestrator |
2025-07-12 15:47:43.773836 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-12 15:47:43.773863 | orchestrator | Saturday 12 July 2025 15:36:48 +0000 (0:00:00.693) 0:00:10.081 *********
2025-07-12 15:47:43.773875 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.773885 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.773896 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.773906 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.773917 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.773927 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.773938 | orchestrator |
2025-07-12 15:47:43.773963 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-12 15:47:43.773974 | orchestrator | Saturday 12 July 2025 15:36:49 +0000 (0:00:00.925) 0:00:11.007 *********
2025-07-12 15:47:43.773985 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 15:47:43.773996 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 15:47:43.774006 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 15:47:43.774065 | orchestrator |
2025-07-12 15:47:43.774079 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-12 15:47:43.774089 | orchestrator | Saturday 12 July 2025 15:36:52 +0000 (0:00:03.075) 0:00:14.082 *********
2025-07-12 15:47:43.774100 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 15:47:43.774111 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 15:47:43.774121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 15:47:43.774132 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.774143 | orchestrator |
2025-07-12 15:47:43.774159 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-12 15:47:43.774170 | orchestrator | Saturday 12 July 2025 15:36:53 +0000 (0:00:00.868) 0:00:14.950 *********
2025-07-12 15:47:43.774182 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 15:47:43.774195 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 15:47:43.774206 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 15:47:43.774232 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.774243 | orchestrator |
2025-07-12 15:47:43.774260 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-12 15:47:43.774271 | orchestrator | Saturday 12 July 2025 15:36:54 +0000 (0:00:00.904) 0:00:15.855 *********
2025-07-12 15:47:43.774283 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 15:47:43.774302 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 15:47:43.774321 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 15:47:43.774332 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.774343 | orchestrator |
2025-07-12 15:47:43.774354 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-12 15:47:43.774365 | orchestrator | Saturday 12 July 2025 15:36:54 +0000 (0:00:00.337) 0:00:16.192 *********
2025-07-12 15:47:43.774377 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-12 15:36:49.974026', 'end': '2025-07-12 15:36:50.286897', 'delta': '0:00:00.312871', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 15:47:43.774399 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-12 15:36:51.095627', 'end': '2025-07-12 15:36:51.406074', 'delta': '0:00:00.310447', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 15:47:43.774411 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-12 15:36:52.044965', 'end': '2025-07-12 15:36:52.338652', 'delta': '0:00:00.293687', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 15:47:43.774422 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.774433 | orchestrator |
2025-07-12 15:47:43.774444 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-07-12 15:47:43.774455 | orchestrator | Saturday 12 July 2025 15:36:54 +0000 (0:00:00.162) 0:00:16.355 *********
2025-07-12 15:47:43.774466 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.774476 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.774487 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.774497 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.774508 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.774518 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.774535 | orchestrator |
2025-07-12 15:47:43.774546 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-12 15:47:43.774556 | orchestrator | Saturday 12 July 2025 15:36:56 +0000 (0:00:01.217) 0:00:17.573 *********
2025-07-12 15:47:43.774567 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.774578 | orchestrator |
2025-07-12 15:47:43.774588 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-12 15:47:43.774599 | orchestrator | Saturday 12 July 2025 15:36:56 +0000 (0:00:00.828) 0:00:18.401 *********
2025-07-12 15:47:43.774610 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.774620 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.774631 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.774641 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.774652 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.774663 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.774673 | orchestrator |
2025-07-12 15:47:43.774684 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-12 15:47:43.774700 | orchestrator | Saturday 12 July 2025 15:36:58 +0000 (0:00:01.484) 0:00:19.886 *********
2025-07-12 15:47:43.774711 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.774721 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.774736 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.774747 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.774757 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.774774 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.774785 | orchestrator |
2025-07-12 15:47:43.774795 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 15:47:43.774806 | orchestrator | Saturday 12 July 2025 15:36:59 +0000 (0:00:00.983) 0:00:20.869 *********
2025-07-12 15:47:43.774821 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.774832 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.774873 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.774885 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.774901 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.774912 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.774922 | orchestrator |
2025-07-12 15:47:43.774933 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-12 15:47:43.774944 | orchestrator | Saturday 12 July 2025 15:37:00 +0000 (0:00:00.904) 0:00:21.774 *********
2025-07-12 15:47:43.774955 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.774965 | orchestrator |
2025-07-12 15:47:43.774988 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-12 15:47:43.774999 | orchestrator | Saturday 12 July 2025 15:37:00 +0000 (0:00:00.101) 0:00:21.875 *********
2025-07-12 15:47:43.775010 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.775021 | orchestrator |
2025-07-12 15:47:43.775031 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 15:47:43.775042 | orchestrator | Saturday 12 July 2025 15:37:00 +0000 (0:00:00.244) 0:00:22.119 *********
2025-07-12 15:47:43.775053 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.775063 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.775074 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.775084 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.775095 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.775111 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.775122 | orchestrator |
2025-07-12 15:47:43.775133 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-12 15:47:43.775149 | orchestrator | Saturday 12 July 2025 15:37:01 +0000 (0:00:00.849) 0:00:22.969 *********
2025-07-12 15:47:43.775160 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.775170 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.775181 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.775191 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.775209 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.775219 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.775230 | orchestrator |
2025-07-12 15:47:43.775240 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-12 15:47:43.775251 | orchestrator | Saturday 12 July 2025 15:37:02 +0000 (0:00:00.968) 0:00:23.938 *********
2025-07-12 15:47:43.775262 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.775272 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.775283 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.775294 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.775304 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.775314 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.775325 | orchestrator |
2025-07-12 15:47:43.775335 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-12 15:47:43.775346 | orchestrator | Saturday 12 July 2025 15:37:03 +0000 (0:00:01.118) 0:00:25.056 *********
2025-07-12 15:47:43.775357 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.775367 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.775378 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.775388 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.775399 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.775409 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.775420 | orchestrator |
2025-07-12 15:47:43.775431 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-07-12 15:47:43.775441 | orchestrator | Saturday 12 July 2025 15:37:04 +0000 (0:00:01.226) 0:00:26.283 *********
2025-07-12 15:47:43.775458 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.775469 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.775479 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.775490 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.775508 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.775518 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.775529 | orchestrator |
2025-07-12 15:47:43.775539 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-07-12 15:47:43.775550 | orchestrator | Saturday 12 July 2025 15:37:05 +0000 (0:00:00.671) 0:00:26.954 *********
2025-07-12 15:47:43.775561 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.775571 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.775581 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.775592 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.775602 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.775613 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.775623 | orchestrator |
2025-07-12 15:47:43.775634 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-07-12 15:47:43.775645 | orchestrator | Saturday 12 July 2025 15:37:06 +0000 (0:00:00.616) 0:00:27.571 *********
2025-07-12 15:47:43.775655 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.775666 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.775680 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.775691 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.775701 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.775718 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.775729 | orchestrator |
2025-07-12 15:47:43.775739 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-07-12 15:47:43.775750 | orchestrator | Saturday 12 July 2025 15:37:06 +0000 (0:00:00.631) 0:00:28.202 *********
2025-07-12 15:47:43.775766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.775785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.775797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.775808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.775833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.775889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.775903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.775914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.775933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e', 'scsi-SQEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 15:47:43.775963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.775976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 15:47:43.775989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.776000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 15:47:43.776011 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776091 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.776117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093', 'scsi-SQEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part1', 'scsi-SQEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part14', 'scsi-SQEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part15', 'scsi-SQEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part16', 'scsi-SQEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776183 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8', 'scsi-SQEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part1', 'scsi-SQEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part14', 'scsi-SQEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part15', 
'scsi-SQEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part16', 'scsi-SQEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776299 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.776311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c0189bb--8103--55ae--95fc--ac60d34dc15f-osd--block--0c0189bb--8103--55ae--95fc--ac60d34dc15f', 'dm-uuid-LVM-tf720NRkUyPSvEBWzFdYzrzVAVv12n3Ctx3WNdW8l0E21IRHNT0pJMf31Czyjp3L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f-osd--block--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f', 'dm-uuid-LVM-TlTe1Avr2uKAcYFGEozdZjlJBbzRj5RtcV3spMZ5fndkYcs4g3hs93vJZjrIHT9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776402 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.776413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 
'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0c0189bb--8103--55ae--95fc--ac60d34dc15f-osd--block--0c0189bb--8103--55ae--95fc--ac60d34dc15f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jyl4Kj-iZOl-sy7q-Pq72-HD7M-gIjU-dg1WiH', 'scsi-0QEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984', 'scsi-SQEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f-osd--block--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s6i02u-ktCZ-MuCo-rpun-X43h-5Be3-TQShRX', 'scsi-0QEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2', 'scsi-SQEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21', 'scsi-SQEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ed518422--90c3--5ab9--913f--91d667874e9d-osd--block--ed518422--90c3--5ab9--913f--91d667874e9d', 'dm-uuid-LVM-XVmadN0mqQ2oHtzAhxUE6pN3WTcrFBP0WnjWT8Hxg8AFRWeEheH4oiqNL1GeIsoM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--66e431f6--efaf--5b66--8dd9--edbf314ce410-osd--block--66e431f6--efaf--5b66--8dd9--edbf314ce410', 'dm-uuid-LVM-X7Q43GJC6NOnI6uN1nufyrfG9fHQSD9jrK39rmFAu4UvCyjKPGT499811uPfawyh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776661 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.776689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776701 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ed518422--90c3--5ab9--913f--91d667874e9d-osd--block--ed518422--90c3--5ab9--913f--91d667874e9d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XhgY2L-dNwu-Wjve-oZCH-eyUb-VpDX-4pdae2', 'scsi-0QEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f', 'scsi-SQEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--66e431f6--efaf--5b66--8dd9--edbf314ce410-osd--block--66e431f6--efaf--5b66--8dd9--edbf314ce410'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N4rQjE-Lh8a-mzut-ehOW-vGJw-81If-Fbu8pa', 'scsi-0QEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd', 'scsi-SQEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96', 'scsi-SQEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776752 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.776762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98eaa118--ceae--5fd7--911b--5a5c065fb5e7-osd--block--98eaa118--ceae--5fd7--911b--5a5c065fb5e7', 'dm-uuid-LVM-I64y3JwzPT8m2omvdUM4ThksJnVVo5jdKhE5B1OA4VTYgglcCz6olKyaXoO2aiaq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--d3106c13--92fd--5dcd--ba4d--74ce9f77b023-osd--block--d3106c13--92fd--5dcd--ba4d--74ce9f77b023', 'dm-uuid-LVM-iQcQMh1cncewEXXEaxf144lrXeIlB3JcF6MDxTVlUyqUBwh1ozHrVMrJKwQhsLk3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:47:43.776954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.776981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--98eaa118--ceae--5fd7--911b--5a5c065fb5e7-osd--block--98eaa118--ceae--5fd7--911b--5a5c065fb5e7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ALQMmF-hxLg-dfN1-POEx-XGkM-suB0-m6rHC3', 'scsi-0QEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d', 'scsi-SQEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.777003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d3106c13--92fd--5dcd--ba4d--74ce9f77b023-osd--block--d3106c13--92fd--5dcd--ba4d--74ce9f77b023'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GQ7H0t-n3DY-Urch-Q632-9o6L-oJBd-RuffH9', 'scsi-0QEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be', 'scsi-SQEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.777022 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a', 'scsi-SQEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.777039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:47:43.777056 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.777066 | orchestrator | 2025-07-12 15:47:43.777076 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-07-12 15:47:43.777086 | orchestrator | Saturday 12 July 2025 15:37:08 +0000 (0:00:01.673) 0:00:29.876 ********* 2025-07-12 15:47:43.777096 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777112 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43 | INFO  | Task 0fbe89c5-3ffc-4129-a6bb-4b680b1f59cb is in state SUCCESS 2025-07-12 15:47:43.777122 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777160 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777171 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777181 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777197 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777212 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777224 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e', 'scsi-SQEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3e1d17b-8112-49c7-87d4-1e73815fd43e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 15:47:43.777240 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777263 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777273 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777283 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777315 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777330 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777340 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777356 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777373 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777383 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.777399 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093', 'scsi-SQEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part1', 'scsi-SQEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part14', 'scsi-SQEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part15', 'scsi-SQEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part16', 'scsi-SQEMU_QEMU_HARDDISK_b361a598-1b86-4f22-9f34-916651b9c093-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777410 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777431 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.777441 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778168 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778247 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778277 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778290 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778320 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778332 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778376 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8', 'scsi-SQEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part1', 'scsi-SQEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part14', 'scsi-SQEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part15', 'scsi-SQEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': 
['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part16', 'scsi-SQEMU_QEMU_HARDDISK_e870d793-04c8-4d31-a748-bbae651abfd8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778392 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778412 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.778425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--0c0189bb--8103--55ae--95fc--ac60d34dc15f-osd--block--0c0189bb--8103--55ae--95fc--ac60d34dc15f', 'dm-uuid-LVM-tf720NRkUyPSvEBWzFdYzrzVAVv12n3Ctx3WNdW8l0E21IRHNT0pJMf31Czyjp3L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778438 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f-osd--block--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f', 'dm-uuid-LVM-TlTe1Avr2uKAcYFGEozdZjlJBbzRj5RtcV3spMZ5fndkYcs4g3hs93vJZjrIHT9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778459 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-07-12 15:47:43.778471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778487 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.778498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778510 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 15:47:43.778527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778539 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778568 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778584 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed518422--90c3--5ab9--913f--91d667874e9d-osd--block--ed518422--90c3--5ab9--913f--91d667874e9d', 'dm-uuid-LVM-XVmadN0mqQ2oHtzAhxUE6pN3WTcrFBP0WnjWT8Hxg8AFRWeEheH4oiqNL1GeIsoM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778596 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 15:47:43.778621 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--66e431f6--efaf--5b66--8dd9--edbf314ce410-osd--block--66e431f6--efaf--5b66--8dd9--edbf314ce410', 'dm-uuid-LVM-X7Q43GJC6NOnI6uN1nufyrfG9fHQSD9jrK39rmFAu4UvCyjKPGT499811uPfawyh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0c0189bb--8103--55ae--95fc--ac60d34dc15f-osd--block--0c0189bb--8103--55ae--95fc--ac60d34dc15f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jyl4Kj-iZOl-sy7q-Pq72-HD7M-gIjU-dg1WiH', 'scsi-0QEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984', 'scsi-SQEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778650 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f-osd--block--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s6i02u-ktCZ-MuCo-rpun-X43h-5Be3-TQShRX', 'scsi-0QEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2', 'scsi-SQEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778678 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21', 'scsi-SQEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778740 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 15:47:43.778752 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778774 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778785 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98eaa118--ceae--5fd7--911b--5a5c065fb5e7-osd--block--98eaa118--ceae--5fd7--911b--5a5c065fb5e7', 'dm-uuid-LVM-I64y3JwzPT8m2omvdUM4ThksJnVVo5jdKhE5B1OA4VTYgglcCz6olKyaXoO2aiaq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778819 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778837 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d3106c13--92fd--5dcd--ba4d--74ce9f77b023-osd--block--d3106c13--92fd--5dcd--ba4d--74ce9f77b023', 'dm-uuid-LVM-iQcQMh1cncewEXXEaxf144lrXeIlB3JcF6MDxTVlUyqUBwh1ozHrVMrJKwQhsLk3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778892 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 15:47:43.778906 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778927 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ed518422--90c3--5ab9--913f--91d667874e9d-osd--block--ed518422--90c3--5ab9--913f--91d667874e9d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XhgY2L-dNwu-Wjve-oZCH-eyUb-VpDX-4pdae2', 'scsi-0QEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f', 'scsi-SQEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--66e431f6--efaf--5b66--8dd9--edbf314ce410-osd--block--66e431f6--efaf--5b66--8dd9--edbf314ce410'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N4rQjE-Lh8a-mzut-ehOW-vGJw-81If-Fbu8pa', 'scsi-0QEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd', 'scsi-SQEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778969 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.778987 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96', 'scsi-SQEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779010 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779022 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779034 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779045 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779056 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779067 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.779085 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779102 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779122 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--98eaa118--ceae--5fd7--911b--5a5c065fb5e7-osd--block--98eaa118--ceae--5fd7--911b--5a5c065fb5e7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ALQMmF-hxLg-dfN1-POEx-XGkM-suB0-m6rHC3', 'scsi-0QEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d', 'scsi-SQEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d3106c13--92fd--5dcd--ba4d--74ce9f77b023-osd--block--d3106c13--92fd--5dcd--ba4d--74ce9f77b023'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GQ7H0t-n3DY-Urch-Q632-9o6L-oJBd-RuffH9', 'scsi-0QEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be', 'scsi-SQEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779163 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a', 'scsi-SQEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779175 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:47:43.779186 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.779197 | orchestrator | 2025-07-12 15:47:43.779209 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-07-12 15:47:43.779220 | orchestrator | Saturday 12 July 2025 15:37:10 +0000 (0:00:01.910) 0:00:31.786 ********* 2025-07-12 15:47:43.779231 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.779242 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.779253 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.779263 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.779274 | orchestrator | ok: [testbed-node-4] 2025-07-12 
15:47:43.779284 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.779295 | orchestrator | 2025-07-12 15:47:43.779306 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-07-12 15:47:43.779317 | orchestrator | Saturday 12 July 2025 15:37:11 +0000 (0:00:01.260) 0:00:33.046 ********* 2025-07-12 15:47:43.779328 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.779339 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.779349 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.779360 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.779370 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.779381 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.779392 | orchestrator | 2025-07-12 15:47:43.779403 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-12 15:47:43.779414 | orchestrator | Saturday 12 July 2025 15:37:12 +0000 (0:00:00.844) 0:00:33.890 ********* 2025-07-12 15:47:43.779425 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.779436 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.779447 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.779458 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.779469 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.779479 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.779490 | orchestrator | 2025-07-12 15:47:43.779501 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-12 15:47:43.779512 | orchestrator | Saturday 12 July 2025 15:37:13 +0000 (0:00:01.336) 0:00:35.227 ********* 2025-07-12 15:47:43.779522 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.779533 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.779550 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.779560 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 15:47:43.779571 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.779582 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.779592 | orchestrator | 2025-07-12 15:47:43.779603 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-12 15:47:43.779614 | orchestrator | Saturday 12 July 2025 15:37:14 +0000 (0:00:00.881) 0:00:36.109 ********* 2025-07-12 15:47:43.779625 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.779635 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.779646 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.779657 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.779667 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.779678 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.779689 | orchestrator | 2025-07-12 15:47:43.779705 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-12 15:47:43.779717 | orchestrator | Saturday 12 July 2025 15:37:15 +0000 (0:00:00.857) 0:00:36.966 ********* 2025-07-12 15:47:43.779728 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.779738 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.779749 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.779760 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.779770 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.779781 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.779792 | orchestrator | 2025-07-12 15:47:43.779803 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-07-12 15:47:43.779813 | orchestrator | Saturday 12 July 2025 15:37:16 +0000 (0:00:00.822) 0:00:37.788 ********* 2025-07-12 15:47:43.779824 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 15:47:43.779835 | 
orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-07-12 15:47:43.779871 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-12 15:47:43.779883 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-07-12 15:47:43.779894 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-12 15:47:43.779905 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-07-12 15:47:43.779920 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-07-12 15:47:43.779931 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-07-12 15:47:43.779942 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-07-12 15:47:43.779952 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-07-12 15:47:43.779963 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-07-12 15:47:43.779974 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-07-12 15:47:43.779984 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-07-12 15:47:43.779995 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-07-12 15:47:43.780006 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-07-12 15:47:43.780017 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-07-12 15:47:43.780028 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-07-12 15:47:43.780038 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-07-12 15:47:43.780049 | orchestrator | 2025-07-12 15:47:43.780060 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-07-12 15:47:43.780070 | orchestrator | Saturday 12 July 2025 15:37:19 +0000 (0:00:02.837) 0:00:40.626 ********* 2025-07-12 15:47:43.780081 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 15:47:43.780092 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 
15:47:43.780103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 15:47:43.780114 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.780125 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-12 15:47:43.780135 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-07-12 15:47:43.780153 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-07-12 15:47:43.780164 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.780175 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-07-12 15:47:43.780185 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-07-12 15:47:43.780196 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-07-12 15:47:43.780207 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.780218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-12 15:47:43.780228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-12 15:47:43.780239 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-12 15:47:43.780250 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.780260 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-12 15:47:43.780271 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-12 15:47:43.780282 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-12 15:47:43.780292 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.780303 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-12 15:47:43.780314 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-12 15:47:43.780324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-12 15:47:43.780335 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.780346 | 
orchestrator | 2025-07-12 15:47:43.780357 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-07-12 15:47:43.780368 | orchestrator | Saturday 12 July 2025 15:37:19 +0000 (0:00:00.772) 0:00:41.399 ********* 2025-07-12 15:47:43.780379 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.780390 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.780401 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.780412 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.780423 | orchestrator | 2025-07-12 15:47:43.780434 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-12 15:47:43.780445 | orchestrator | Saturday 12 July 2025 15:37:21 +0000 (0:00:01.512) 0:00:42.911 ********* 2025-07-12 15:47:43.780456 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.780467 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.780478 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.780488 | orchestrator | 2025-07-12 15:47:43.780499 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-12 15:47:43.780510 | orchestrator | Saturday 12 July 2025 15:37:21 +0000 (0:00:00.409) 0:00:43.320 ********* 2025-07-12 15:47:43.780521 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.780532 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.780548 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.780559 | orchestrator | 2025-07-12 15:47:43.780571 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-12 15:47:43.780581 | orchestrator | Saturday 12 July 2025 15:37:22 +0000 (0:00:00.527) 0:00:43.848 ********* 2025-07-12 15:47:43.780592 | 
orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.780603 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.780613 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.780624 | orchestrator | 2025-07-12 15:47:43.780635 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-12 15:47:43.780646 | orchestrator | Saturday 12 July 2025 15:37:22 +0000 (0:00:00.256) 0:00:44.105 ********* 2025-07-12 15:47:43.780657 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.780668 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.780679 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.780690 | orchestrator | 2025-07-12 15:47:43.780700 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-12 15:47:43.780717 | orchestrator | Saturday 12 July 2025 15:37:22 +0000 (0:00:00.365) 0:00:44.470 ********* 2025-07-12 15:47:43.780728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.780739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:47:43.780753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.780764 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.780775 | orchestrator | 2025-07-12 15:47:43.780786 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-12 15:47:43.780796 | orchestrator | Saturday 12 July 2025 15:37:23 +0000 (0:00:00.301) 0:00:44.771 ********* 2025-07-12 15:47:43.780807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.780818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:47:43.780829 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.780839 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.780864 | 
orchestrator | 2025-07-12 15:47:43.780876 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-12 15:47:43.780887 | orchestrator | Saturday 12 July 2025 15:37:23 +0000 (0:00:00.339) 0:00:45.111 ********* 2025-07-12 15:47:43.780897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.780908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:47:43.780919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.780929 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.780940 | orchestrator | 2025-07-12 15:47:43.780951 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-12 15:47:43.780962 | orchestrator | Saturday 12 July 2025 15:37:24 +0000 (0:00:00.575) 0:00:45.686 ********* 2025-07-12 15:47:43.780973 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.780983 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.780994 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.781005 | orchestrator | 2025-07-12 15:47:43.781016 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-12 15:47:43.781026 | orchestrator | Saturday 12 July 2025 15:37:24 +0000 (0:00:00.396) 0:00:46.083 ********* 2025-07-12 15:47:43.781037 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-12 15:47:43.781048 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-12 15:47:43.781058 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-12 15:47:43.781069 | orchestrator | 2025-07-12 15:47:43.781080 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-12 15:47:43.781091 | orchestrator | Saturday 12 July 2025 15:37:24 +0000 (0:00:00.430) 0:00:46.514 ********* 2025-07-12 15:47:43.781101 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 
2025-07-12 15:47:43.781112 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 15:47:43.781123 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 15:47:43.781134 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-07-12 15:47:43.781144 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-12 15:47:43.781155 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 15:47:43.781166 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 15:47:43.781177 | orchestrator | 2025-07-12 15:47:43.781187 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-12 15:47:43.781198 | orchestrator | Saturday 12 July 2025 15:37:25 +0000 (0:00:00.796) 0:00:47.310 ********* 2025-07-12 15:47:43.781209 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 15:47:43.781220 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 15:47:43.781237 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 15:47:43.781248 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-07-12 15:47:43.781258 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-12 15:47:43.781269 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 15:47:43.781280 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 15:47:43.781291 | orchestrator | 2025-07-12 15:47:43.781302 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************
2025-07-12 15:47:43.781312 | orchestrator | Saturday 12 July 2025 15:37:27 +0000 (0:00:01.868) 0:00:49.179 *********
2025-07-12 15:47:43.781330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:47:43.781342 | orchestrator |
2025-07-12 15:47:43.781353 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 15:47:43.781364 | orchestrator | Saturday 12 July 2025 15:37:29 +0000 (0:00:01.579) 0:00:50.758 *********
2025-07-12 15:47:43.781375 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:47:43.781386 | orchestrator |
2025-07-12 15:47:43.781397 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 15:47:43.781407 | orchestrator | Saturday 12 July 2025 15:37:30 +0000 (0:00:01.172) 0:00:51.931 *********
2025-07-12 15:47:43.781418 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.781429 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.781439 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.781450 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.781461 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.781472 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.781483 | orchestrator |
2025-07-12 15:47:43.781493 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 15:47:43.781508 | orchestrator | Saturday 12 July 2025 15:37:31 +0000 (0:00:00.871) 0:00:52.802 *********
2025-07-12 15:47:43.781519 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.781530 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.781541 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.781552 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.781563 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.781573 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.781584 | orchestrator |
2025-07-12 15:47:43.781595 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 15:47:43.781606 | orchestrator | Saturday 12 July 2025 15:37:32 +0000 (0:00:01.268) 0:00:54.071 *********
2025-07-12 15:47:43.781617 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.781628 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.781638 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.781649 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.781660 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.781671 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.781682 | orchestrator |
2025-07-12 15:47:43.781693 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 15:47:43.781703 | orchestrator | Saturday 12 July 2025 15:37:34 +0000 (0:00:01.752) 0:00:55.823 *********
2025-07-12 15:47:43.781714 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.781725 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.781735 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.781746 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.781757 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.781768 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.781789 | orchestrator |
2025-07-12 15:47:43.781801 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 15:47:43.781811 | orchestrator | Saturday 12 July 2025 15:37:35 +0000 (0:00:01.379) 0:00:57.202 *********
2025-07-12 15:47:43.781822 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.781833 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.781878 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.781892 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.781903 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.781914 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.781925 | orchestrator |
2025-07-12 15:47:43.781936 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 15:47:43.781946 | orchestrator | Saturday 12 July 2025 15:37:36 +0000 (0:00:01.142) 0:00:58.345 *********
2025-07-12 15:47:43.781957 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.781968 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.781979 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.781989 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.782000 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.782011 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.782058 | orchestrator |
2025-07-12 15:47:43.782070 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 15:47:43.782081 | orchestrator | Saturday 12 July 2025 15:37:37 +0000 (0:00:00.644) 0:00:58.990 *********
2025-07-12 15:47:43.782092 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.782103 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.782113 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.782124 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.782135 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.782145 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.782156 | orchestrator |
2025-07-12 15:47:43.782167 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 15:47:43.782177 | orchestrator | Saturday 12 July 2025 15:37:38 +0000 (0:00:00.893) 0:00:59.883 *********
2025-07-12 15:47:43.782188 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.782199 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.782209 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.782220 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.782231 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.782241 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.782252 | orchestrator |
2025-07-12 15:47:43.782263 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 15:47:43.782274 | orchestrator | Saturday 12 July 2025 15:37:39 +0000 (0:00:01.091) 0:01:00.975 *********
2025-07-12 15:47:43.782284 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.782295 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.782306 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.782316 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.782327 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.782338 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.782349 | orchestrator |
2025-07-12 15:47:43.782359 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 15:47:43.782370 | orchestrator | Saturday 12 July 2025 15:37:40 +0000 (0:00:01.235) 0:01:02.211 *********
2025-07-12 15:47:43.782381 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.782392 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.782403 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.782413 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.782437 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.782449 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.782459 | orchestrator |
2025-07-12 15:47:43.782470 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 15:47:43.782481 | orchestrator | Saturday 12 July 2025 15:37:41 +0000 (0:00:00.605) 0:01:02.816 *********
2025-07-12 15:47:43.782492 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.782510 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.782521 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.782532 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.782542 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.782553 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.782564 | orchestrator |
2025-07-12 15:47:43.782575 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 15:47:43.782586 | orchestrator | Saturday 12 July 2025 15:37:42 +0000 (0:00:00.804) 0:01:03.624 *********
2025-07-12 15:47:43.782596 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.782607 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.782618 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.782628 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.782639 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.782650 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.782661 | orchestrator |
2025-07-12 15:47:43.782671 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 15:47:43.782687 | orchestrator | Saturday 12 July 2025 15:37:42 +0000 (0:00:00.898) 0:01:04.522 *********
2025-07-12 15:47:43.782699 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.782710 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.782720 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.782731 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.782742 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.782753 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.782764 | orchestrator |
2025-07-12 15:47:43.782774 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 15:47:43.782785 | orchestrator | Saturday 12 July 2025 15:37:43 +0000 (0:00:00.900) 0:01:05.423 *********
2025-07-12 15:47:43.782796 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.782807 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.782817 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.782828 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.782839 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.782896 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.782908 | orchestrator |
2025-07-12 15:47:43.782919 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 15:47:43.782930 | orchestrator | Saturday 12 July 2025 15:37:44 +0000 (0:00:00.757) 0:01:06.181 *********
2025-07-12 15:47:43.782941 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.782952 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.782963 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.782973 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.782984 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.782995 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.783006 | orchestrator |
2025-07-12 15:47:43.783016 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 15:47:43.783027 | orchestrator | Saturday 12 July 2025 15:37:45 +0000 (0:00:01.053) 0:01:07.234 *********
2025-07-12 15:47:43.783038 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.783048 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.783059 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.783070 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.783080 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.783091 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.783102 | orchestrator |
2025-07-12 15:47:43.783112 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 15:47:43.783123 | orchestrator | Saturday 12 July 2025 15:37:46 +0000 (0:00:00.824) 0:01:08.059 *********
2025-07-12 15:47:43.783134 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.783145 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.783155 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.783166 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.783176 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.783194 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.783205 | orchestrator |
2025-07-12 15:47:43.783215 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 15:47:43.783226 | orchestrator | Saturday 12 July 2025 15:37:47 +0000 (0:00:00.907) 0:01:08.966 *********
2025-07-12 15:47:43.783237 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.783248 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.783258 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.783269 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.783280 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.783290 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.783301 | orchestrator |
2025-07-12 15:47:43.783312 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 15:47:43.783323 | orchestrator | Saturday 12 July 2025 15:37:47 +0000 (0:00:00.559) 0:01:09.525 *********
2025-07-12 15:47:43.783334 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.783344 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.783355 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.783366 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.783376 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.783387 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.783398 | orchestrator |
2025-07-12 15:47:43.783409 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-07-12 15:47:43.783419 | orchestrator | Saturday 12 July 2025 15:37:49 +0000 (0:00:01.260) 0:01:10.786 *********
2025-07-12 15:47:43.783430 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.783439 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.783449 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.783458 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:47:43.783468 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:47:43.783477 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:47:43.783487 | orchestrator |
2025-07-12 15:47:43.783497 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-07-12 15:47:43.783506 | orchestrator | Saturday 12 July 2025 15:37:50 +0000 (0:00:01.757) 0:01:12.544 *********
2025-07-12 15:47:43.783516 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.783525 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:47:43.783535 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:47:43.783550 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.783561 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:47:43.783570 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.783580 | orchestrator |
2025-07-12 15:47:43.783590 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-07-12 15:47:43.783599 | orchestrator | Saturday 12 July 2025 15:37:53 +0000 (0:00:02.067) 0:01:14.611 *********
2025-07-12 15:47:43.783609 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:47:43.783619 | orchestrator |
2025-07-12 15:47:43.783629 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-07-12 15:47:43.783638 | orchestrator | Saturday 12 July 2025 15:37:54 +0000 (0:00:01.194) 0:01:15.806 *********
2025-07-12 15:47:43.783647 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.783657 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.783666 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.783676 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.783686 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.783695 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.783705 | orchestrator |
2025-07-12 15:47:43.783719 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-07-12 15:47:43.783729 | orchestrator | Saturday 12 July 2025 15:37:55 +0000 (0:00:00.823) 0:01:16.629 *********
2025-07-12 15:47:43.783738 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.783748 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.783763 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.783772 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.783782 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.783791 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.783801 | orchestrator |
2025-07-12 15:47:43.783811 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-07-12 15:47:43.783820 | orchestrator | Saturday 12 July 2025 15:37:55 +0000 (0:00:00.558) 0:01:17.188 *********
2025-07-12 15:47:43.783830 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 15:47:43.783839 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 15:47:43.783864 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 15:47:43.783874 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 15:47:43.783883 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 15:47:43.783893 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 15:47:43.783902 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 15:47:43.783912 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 15:47:43.783921 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 15:47:43.783931 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 15:47:43.783940 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 15:47:43.783949 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 15:47:43.783959 | orchestrator |
2025-07-12 15:47:43.783968 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-07-12 15:47:43.783978 | orchestrator | Saturday 12 July 2025 15:37:57 +0000 (0:00:01.504) 0:01:18.692 *********
2025-07-12 15:47:43.783987 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.783997 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.784006 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.784016 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:47:43.784025 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:47:43.784035 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:47:43.784044 | orchestrator |
2025-07-12 15:47:43.784054 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-07-12 15:47:43.784063 | orchestrator | Saturday 12 July 2025 15:37:58 +0000 (0:00:00.882) 0:01:19.574 *********
2025-07-12 15:47:43.784073 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.784082 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.784092 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.784101 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.784110 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.784120 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.784129 | orchestrator |
2025-07-12 15:47:43.784139 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-07-12 15:47:43.784149 | orchestrator | Saturday 12 July 2025 15:37:58 +0000 (0:00:00.798) 0:01:20.372 *********
2025-07-12 15:47:43.784158 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.784168 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.784177 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.784187 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.784196 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.784205 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.784215 | orchestrator |
2025-07-12 15:47:43.784224 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-07-12 15:47:43.784234 | orchestrator | Saturday 12 July 2025 15:37:59 +0000 (0:00:00.554) 0:01:20.927 *********
2025-07-12 15:47:43.784249 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.784259 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.784268 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.784277 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.784287 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.784296 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.784306 | orchestrator |
2025-07-12 15:47:43.784315 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-07-12 15:47:43.784330 | orchestrator | Saturday 12 July 2025 15:38:00 +0000 (0:00:00.861) 0:01:21.789 *********
2025-07-12 15:47:43.784340 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:47:43.784350 | orchestrator |
2025-07-12 15:47:43.784360 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-07-12 15:47:43.784369 | orchestrator | Saturday 12 July 2025 15:38:01 +0000 (0:00:01.167) 0:01:22.956 *********
2025-07-12 15:47:43.784379 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.784388 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.784398 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.784407 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.784417 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.784427 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.784436 | orchestrator |
2025-07-12 15:47:43.784446 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-07-12 15:47:43.784456 | orchestrator | Saturday 12 July 2025 15:39:15 +0000 (0:01:13.886) 0:02:36.842 *********
2025-07-12 15:47:43.784465 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 15:47:43.784479 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 15:47:43.784489 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 15:47:43.784498 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.784508 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 15:47:43.784517 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 15:47:43.784527 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 15:47:43.784536 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.784546 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 15:47:43.784556 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 15:47:43.784566 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 15:47:43.784575 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.784585 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 15:47:43.784594 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 15:47:43.784604 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 15:47:43.784613 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.784623 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 15:47:43.784633 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 15:47:43.784642 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 15:47:43.784652 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.784662 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 15:47:43.784671 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 15:47:43.784681 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 15:47:43.784698 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.784708 | orchestrator |
2025-07-12 15:47:43.784717 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-07-12 15:47:43.784727 | orchestrator | Saturday 12 July 2025 15:39:16 +0000 (0:00:00.958) 0:02:37.801 *********
2025-07-12 15:47:43.784736 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.784746 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.784756 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.784765 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.784775 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.784784 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.784794 | orchestrator |
2025-07-12 15:47:43.784803 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-07-12 15:47:43.784813 | orchestrator | Saturday 12 July 2025 15:39:16 +0000 (0:00:00.605) 0:02:38.406 *********
2025-07-12 15:47:43.784823 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.784832 | orchestrator |
2025-07-12 15:47:43.784842 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-07-12 15:47:43.784864 | orchestrator | Saturday 12 July 2025 15:39:16 +0000 (0:00:00.135) 0:02:38.541 *********
2025-07-12 15:47:43.784874 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.784883 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.784893 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.784902 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.784911 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.784921 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.784930 | orchestrator |
2025-07-12 15:47:43.784940 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-07-12 15:47:43.784949 | orchestrator | Saturday 12 July 2025 15:39:17 +0000 (0:00:00.857) 0:02:39.398 *********
2025-07-12 15:47:43.784959 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.784968 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.784977 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.784987 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.784996 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.785006 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.785015 | orchestrator |
2025-07-12 15:47:43.785025 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-07-12 15:47:43.785034 | orchestrator | Saturday 12 July 2025 15:39:18 +0000 (0:00:00.666) 0:02:40.065 *********
2025-07-12 15:47:43.785044 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.785058 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.785068 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.785078 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.785087 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.785097 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.785106 | orchestrator |
2025-07-12 15:47:43.785116 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-07-12 15:47:43.785125 | orchestrator | Saturday 12 July 2025 15:39:19 +0000 (0:00:00.829) 0:02:40.895 *********
2025-07-12 15:47:43.785135 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.785145 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.785154 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.785164 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.785173 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.785183 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.785192 | orchestrator |
2025-07-12 15:47:43.785202 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-07-12 15:47:43.785211 | orchestrator | Saturday 12 July 2025 15:39:21 +0000 (0:00:02.490) 0:02:43.385 *********
2025-07-12 15:47:43.785221 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.785230 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.785240 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.785254 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.785264 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.785273 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.785282 | orchestrator |
2025-07-12 15:47:43.785296 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-07-12 15:47:43.785306 | orchestrator | Saturday 12 July 2025 15:39:22 +0000 (0:00:00.735) 0:02:44.121 *********
2025-07-12 15:47:43.785316 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:47:43.785326 | orchestrator |
2025-07-12 15:47:43.785336 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-07-12 15:47:43.785345 | orchestrator | Saturday 12 July 2025 15:39:23 +0000 (0:00:01.123) 0:02:45.244 *********
2025-07-12 15:47:43.785355 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.785364 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.785374 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.785383 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.785393 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.785402 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.785412 | orchestrator |
2025-07-12 15:47:43.785422 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-07-12 15:47:43.785431 | orchestrator | Saturday 12 July 2025 15:39:24 +0000 (0:00:00.568) 0:02:45.813 *********
2025-07-12 15:47:43.785441 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.785450 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.785459 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.785469 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.785478 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.785488 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.785498 | orchestrator |
2025-07-12 15:47:43.785507 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-07-12 15:47:43.785517 | orchestrator | Saturday 12 July 2025 15:39:25 +0000 (0:00:00.936) 0:02:46.750 *********
2025-07-12 15:47:43.785526 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.785536 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.785545 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.785555 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.785564 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.785573 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.785583 | orchestrator |
2025-07-12 15:47:43.785592 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-07-12 15:47:43.785602 | orchestrator | Saturday 12 July 2025 15:39:25 +0000 (0:00:00.723) 0:02:47.474 *********
2025-07-12 15:47:43.785612 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.785621 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.785630 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.785640 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.785649 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.785659 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.785668 | orchestrator |
2025-07-12 15:47:43.785678 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-07-12 15:47:43.785687 | orchestrator | Saturday 12 July 2025 15:39:26 +0000 (0:00:00.790) 0:02:48.264 *********
2025-07-12 15:47:43.785697 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.785706 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.785716 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.785725 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.785734 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.785744 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.785753 | orchestrator |
2025-07-12 15:47:43.785763 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-07-12 15:47:43.785772 | orchestrator | Saturday 12 July 2025 15:39:27 +0000 (0:00:00.647) 0:02:48.911 *********
2025-07-12 15:47:43.785787 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.785797 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.785806 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.785816 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.785825 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.785835 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.785858 | orchestrator |
2025-07-12 15:47:43.785868 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-07-12 15:47:43.785878 | orchestrator | Saturday 12 July 2025 15:39:28 +0000 (0:00:00.796) 0:02:49.707 *********
2025-07-12 15:47:43.785887 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.785896 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.785906 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.785915 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.785924 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.785934 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.785943 | orchestrator |
2025-07-12 15:47:43.785953 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-07-12 15:47:43.785967 | orchestrator | Saturday 12 July 2025 15:39:28 +0000 (0:00:00.595) 0:02:50.302 *********
2025-07-12 15:47:43.785977 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.785987 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.785996 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.786006 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:47:43.786046 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:47:43.786058 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:47:43.786067 | orchestrator |
2025-07-12 15:47:43.786086 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-07-12 15:47:43.786096 | orchestrator | Saturday 12 July 2025 15:39:29 +0000 (0:00:00.741) 0:02:51.044 *********
2025-07-12 15:47:43.786106 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.786116 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.786125 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.786135 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:47:43.786144 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:47:43.786154 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:47:43.786163 | orchestrator |
2025-07-12 15:47:43.786173 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-07-12 15:47:43.786183 | orchestrator | Saturday 12 July 2025 15:39:30 +0000 (0:00:01.072) 0:02:52.117 *********
2025-07-12 15:47:43.786197 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:47:43.786207 | orchestrator |
2025-07-12 15:47:43.786217 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-07-12 15:47:43.786227 | orchestrator | Saturday 12 July 2025 15:39:31 +0000 (0:00:00.976) 0:02:53.093 *********
2025-07-12 15:47:43.786236 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-07-12 15:47:43.786246 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-07-12 15:47:43.786255 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-07-12 15:47:43.786265 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-07-12 15:47:43.786275 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-07-12 15:47:43.786284 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-07-12 15:47:43.786294 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-07-12 15:47:43.786303 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-07-12 15:47:43.786313 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-07-12 15:47:43.786322 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-07-12 15:47:43.786331 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-07-12 15:47:43.786341 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-07-12 15:47:43.786357 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-07-12 15:47:43.786367 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-07-12 15:47:43.786376 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-07-12 15:47:43.786386 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-07-12 15:47:43.786396 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-07-12 15:47:43.786406 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-07-12 15:47:43.786415 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-07-12 15:47:43.786425 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-07-12 15:47:43.786434 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-07-12 15:47:43.786443 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-07-12 15:47:43.786453 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-07-12 15:47:43.786462 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-07-12 15:47:43.786471 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-07-12 15:47:43.786481 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-07-12 15:47:43.786490 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-07-12 15:47:43.786500 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-07-12 15:47:43.786509 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-07-12 15:47:43.786518 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-07-12 15:47:43.786527 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-07-12 15:47:43.786537 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-07-12 15:47:43.786546 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-07-12 15:47:43.786556 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-07-12 15:47:43.786565 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-07-12 15:47:43.786574 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-07-12 15:47:43.786584 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-07-12 15:47:43.786593 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-07-12 15:47:43.786603 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-07-12 15:47:43.786612 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-07-12 15:47:43.786622 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-07-12 15:47:43.786631 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-07-12 15:47:43.786641 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-07-12 15:47:43.786651 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-07-12 15:47:43.786660 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-07-12 15:47:43.786670 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-07-12 15:47:43.786690 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-07-12 15:47:43.786700 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 15:47:43.786710 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-07-12 15:47:43.786719 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 15:47:43.786729 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 15:47:43.786739 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 15:47:43.786748 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 15:47:43.786758 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 15:47:43.786773 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 15:47:43.786782 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 15:47:43.786792 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 15:47:43.786801 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 15:47:43.786818 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 15:47:43.786828 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 15:47:43.786837 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 15:47:43.786883 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 15:47:43.786894 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 15:47:43.786904 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 15:47:43.786913 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 15:47:43.786923 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 15:47:43.786932 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 15:47:43.786942 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 15:47:43.786951 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 15:47:43.786960 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 15:47:43.786970 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 15:47:43.786979 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 15:47:43.786989 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 15:47:43.786998 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 15:47:43.787007 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 15:47:43.787017 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 15:47:43.787026 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 15:47:43.787036 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 15:47:43.787045 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 15:47:43.787055 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 15:47:43.787064 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 15:47:43.787073 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 15:47:43.787083 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-07-12 15:47:43.787092 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 15:47:43.787102 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 15:47:43.787111 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-07-12 15:47:43.787121 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-07-12 15:47:43.787130 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-07-12 15:47:43.787140 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-07-12 15:47:43.787149 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-07-12 15:47:43.787158 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-07-12 15:47:43.787168 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-07-12 15:47:43.787177 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-07-12 15:47:43.787187 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-07-12 15:47:43.787196 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-07-12 
15:47:43.787212 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-07-12 15:47:43.787222 | orchestrator | 2025-07-12 15:47:43.787232 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-07-12 15:47:43.787241 | orchestrator | Saturday 12 July 2025 15:39:37 +0000 (0:00:06.232) 0:02:59.326 ********* 2025-07-12 15:47:43.787251 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.787261 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.787270 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.787279 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.787289 | orchestrator | 2025-07-12 15:47:43.787298 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-07-12 15:47:43.787310 | orchestrator | Saturday 12 July 2025 15:39:38 +0000 (0:00:00.957) 0:03:00.284 ********* 2025-07-12 15:47:43.787319 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.787327 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.787335 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.787343 | orchestrator | 2025-07-12 15:47:43.787351 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-07-12 15:47:43.787358 | orchestrator | Saturday 12 July 2025 15:39:39 +0000 (0:00:00.706) 0:03:00.991 ********* 2025-07-12 15:47:43.787366 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.787378 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.787386 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.787394 | orchestrator | 2025-07-12 15:47:43.787401 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-07-12 15:47:43.787409 | orchestrator | Saturday 12 July 2025 15:39:40 +0000 (0:00:01.437) 0:03:02.429 ********* 2025-07-12 15:47:43.787417 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.787424 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.787432 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.787440 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.787448 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.787455 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.787463 | orchestrator | 2025-07-12 15:47:43.787471 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-07-12 15:47:43.787478 | orchestrator | Saturday 12 July 2025 15:39:41 +0000 (0:00:00.654) 0:03:03.084 ********* 2025-07-12 15:47:43.787486 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.787494 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.787501 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.787509 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.787517 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.787525 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.787532 | orchestrator | 2025-07-12 15:47:43.787540 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-07-12 15:47:43.787548 | orchestrator | Saturday 12 July 
2025 15:39:42 +0000 (0:00:00.842) 0:03:03.926 ********* 2025-07-12 15:47:43.787556 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.787563 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.787571 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.787579 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.787586 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.787598 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.787606 | orchestrator | 2025-07-12 15:47:43.787614 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-07-12 15:47:43.787622 | orchestrator | Saturday 12 July 2025 15:39:42 +0000 (0:00:00.568) 0:03:04.494 ********* 2025-07-12 15:47:43.787630 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.787637 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.787645 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.787653 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.787661 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.787668 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.787676 | orchestrator | 2025-07-12 15:47:43.787684 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-07-12 15:47:43.787692 | orchestrator | Saturday 12 July 2025 15:39:43 +0000 (0:00:00.898) 0:03:05.393 ********* 2025-07-12 15:47:43.787699 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.787707 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.787715 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.787722 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.787730 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.787738 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.787745 | orchestrator | 2025-07-12 15:47:43.787753 | orchestrator | TASK 
[ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-07-12 15:47:43.787761 | orchestrator | Saturday 12 July 2025 15:39:44 +0000 (0:00:00.677) 0:03:06.071 ********* 2025-07-12 15:47:43.787769 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.787776 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.787784 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.787791 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.787799 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.787807 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.787815 | orchestrator | 2025-07-12 15:47:43.787823 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-07-12 15:47:43.787830 | orchestrator | Saturday 12 July 2025 15:39:45 +0000 (0:00:01.038) 0:03:07.110 ********* 2025-07-12 15:47:43.787838 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.787858 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.787866 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.787874 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.787881 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.787889 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.787897 | orchestrator | 2025-07-12 15:47:43.787905 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-07-12 15:47:43.787913 | orchestrator | Saturday 12 July 2025 15:39:46 +0000 (0:00:00.551) 0:03:07.661 ********* 2025-07-12 15:47:43.787921 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.787929 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.787940 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.787949 | orchestrator | skipping: [testbed-node-3] 2025-07-12 
15:47:43.787957 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.787964 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.787972 | orchestrator | 2025-07-12 15:47:43.787980 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-07-12 15:47:43.787988 | orchestrator | Saturday 12 July 2025 15:39:46 +0000 (0:00:00.680) 0:03:08.342 ********* 2025-07-12 15:47:43.787996 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.788004 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.788011 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.788019 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.788027 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.788035 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.788043 | orchestrator | 2025-07-12 15:47:43.788054 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-07-12 15:47:43.788062 | orchestrator | Saturday 12 July 2025 15:39:49 +0000 (0:00:03.046) 0:03:11.388 ********* 2025-07-12 15:47:43.788070 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.788078 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.788086 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.788093 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.788101 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.788109 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.788117 | orchestrator | 2025-07-12 15:47:43.788128 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-07-12 15:47:43.788136 | orchestrator | Saturday 12 July 2025 15:39:50 +0000 (0:00:00.788) 0:03:12.177 ********* 2025-07-12 15:47:43.788144 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.788152 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.788159 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.788168 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.788175 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.788183 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.788191 | orchestrator | 2025-07-12 15:47:43.788199 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-07-12 15:47:43.788207 | orchestrator | Saturday 12 July 2025 15:39:51 +0000 (0:00:00.626) 0:03:12.804 ********* 2025-07-12 15:47:43.788215 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.788222 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.788230 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.788238 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.788246 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.788253 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.788261 | orchestrator | 2025-07-12 15:47:43.788269 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-07-12 15:47:43.788277 | orchestrator | Saturday 12 July 2025 15:39:51 +0000 (0:00:00.742) 0:03:13.546 ********* 2025-07-12 15:47:43.788285 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.788292 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.788300 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.788308 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.788316 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.788324 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.788331 | orchestrator | 
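The rgw_instances items logged above (instance_name, radosgw_address, radosgw_frontend_port) get expanded into per-instance `client.rgw.<cluster>.<host>.<instance>` config sections with a beast frontend endpoint, as the skipped items of the following "Set config to cluster" task show. A minimal Python sketch of that mapping, assuming the section/key layout visible in the log (this is illustrative, not ceph-ansible's actual Jinja template):

```python
def render_rgw_config(cluster, hostname, instances):
    """Expand rgw_instances items into client.rgw.* config sections,
    mirroring the key/value pairs seen in the 'Set config to cluster' items."""
    sections = {}
    for inst in instances:
        name = inst["instance_name"]
        key = f"client.rgw.{cluster}.{hostname}.{name}"
        sections[key] = {
            "log_file": f"/var/log/ceph/ceph-rgw-{cluster}-{hostname}.{name}.log",
            "rgw_frontends": f"beast endpoint={inst['radosgw_address']}:{inst['radosgw_frontend_port']}",
        }
    return sections

# One of the instances from the log above.
cfg = render_rgw_config(
    "default",
    "testbed-node-3",
    [{"instance_name": "rgw0", "radosgw_address": "192.168.16.13", "radosgw_frontend_port": 8081}],
)
```

For testbed-node-3 this yields the section `client.rgw.default.testbed-node-3.rgw0` with `rgw_frontends` set to `beast endpoint=192.168.16.13:8081`, matching the skipped items in the log.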
TASK [ceph-config : Set config to cluster] *************************************
Saturday 12 July 2025 15:39:52 +0000 (0:00:00.655) 0:03:14.201 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-5]

TASK [ceph-config : Set rgw configs to file] ***********************************
Saturday 12 July 2025 15:39:53 +0000 (0:00:00.754) 0:03:14.956 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Create ceph conf directory] ********************************
Saturday 12 July 2025 15:39:53 +0000 (0:00:00.535) 0:03:15.491 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Saturday 12 July 2025 15:39:54 +0000 (0:00:00.666) 0:03:16.158 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Saturday 12 July 2025 15:39:55 +0000 (0:00:00.691) 0:03:16.849 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Saturday 12 July 2025 15:39:55 +0000 (0:00:00.699) 0:03:17.549 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Saturday 12 July 2025 15:39:56 +0000 (0:00:00.629) 0:03:18.179 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-facts : Set_fact _interface] ****************************************
Saturday 12 July 2025 15:39:57 +0000 (0:00:00.844) 0:03:19.023 *********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Saturday 12 July 2025 15:39:57 +0000 (0:00:00.306) 0:03:19.330 *********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Saturday 12 July 2025 15:39:58 +0000 (0:00:00.354) 0:03:19.684 *********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Saturday 12 July 2025 15:39:58 +0000 (0:00:00.307) 0:03:19.992 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Saturday 12 July 2025 15:39:59 +0000 (0:00:00.587) 0:03:20.579 *********
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-5] => (item=0)

TASK [ceph-config : Generate Ceph file] ****************************************
Saturday 12 July 2025 15:40:00 +0000 (0:00:01.818) 0:03:22.398 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 12 July 2025 15:40:03 +0000 (0:00:02.730) 0:03:25.128 *********
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Saturday 12 July 2025 15:40:04 +0000 (0:00:01.052) 0:03:26.181 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Saturday 12 July 2025 15:40:05 +0000 (0:00:01.014) 0:03:27.195 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Saturday 12 July 2025 15:40:05 +0000 (0:00:00.284) 0:03:27.480 *********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
| Saturday 12 July 2025 15:40:07 +0000 (0:00:01.451) 0:03:28.932 ********* 2025-07-12 15:47:43.789556 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 15:47:43.789564 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 15:47:43.789572 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 15:47:43.789579 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.789587 | orchestrator | 2025-07-12 15:47:43.789595 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-07-12 15:47:43.789603 | orchestrator | Saturday 12 July 2025 15:40:08 +0000 (0:00:00.718) 0:03:29.651 ********* 2025-07-12 15:47:43.789615 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.789623 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.789631 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.789638 | orchestrator | 2025-07-12 15:47:43.789646 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-07-12 15:47:43.789654 | orchestrator | Saturday 12 July 2025 15:40:08 +0000 (0:00:00.377) 0:03:30.028 ********* 2025-07-12 15:47:43.789662 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.789669 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.789677 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.789685 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.789693 | orchestrator | 2025-07-12 15:47:43.789700 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-07-12 15:47:43.789708 | orchestrator | Saturday 12 July 2025 15:40:09 +0000 (0:00:00.718) 0:03:30.747 ********* 2025-07-12 15:47:43.789716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.789724 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:47:43.789731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.789739 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.789747 | orchestrator | 2025-07-12 15:47:43.789759 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-07-12 15:47:43.789767 | orchestrator | Saturday 12 July 2025 15:40:09 +0000 (0:00:00.354) 0:03:31.101 ********* 2025-07-12 15:47:43.789775 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.789782 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.789790 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.789798 | orchestrator | 2025-07-12 15:47:43.789806 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-07-12 15:47:43.789814 | orchestrator | Saturday 12 July 2025 15:40:09 +0000 (0:00:00.329) 0:03:31.431 ********* 2025-07-12 15:47:43.789821 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.789829 | orchestrator | 2025-07-12 15:47:43.789837 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-07-12 15:47:43.789873 | orchestrator | Saturday 12 July 2025 15:40:10 +0000 (0:00:00.195) 0:03:31.627 ********* 2025-07-12 15:47:43.789883 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.789891 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.789899 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.789906 | orchestrator | 2025-07-12 15:47:43.789914 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-07-12 15:47:43.789922 | orchestrator | Saturday 12 July 2025 15:40:10 +0000 (0:00:00.247) 0:03:31.874 ********* 2025-07-12 15:47:43.789933 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.789941 | orchestrator | 2025-07-12 
15:47:43.789949 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-07-12 15:47:43.789957 | orchestrator | Saturday 12 July 2025 15:40:10 +0000 (0:00:00.252) 0:03:32.126 ********* 2025-07-12 15:47:43.789964 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.789972 | orchestrator | 2025-07-12 15:47:43.789980 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-07-12 15:47:43.789988 | orchestrator | Saturday 12 July 2025 15:40:10 +0000 (0:00:00.248) 0:03:32.374 ********* 2025-07-12 15:47:43.789995 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.790003 | orchestrator | 2025-07-12 15:47:43.790011 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-07-12 15:47:43.790104 | orchestrator | Saturday 12 July 2025 15:40:11 +0000 (0:00:00.366) 0:03:32.741 ********* 2025-07-12 15:47:43.790113 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.790121 | orchestrator | 2025-07-12 15:47:43.790129 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-07-12 15:47:43.790144 | orchestrator | Saturday 12 July 2025 15:40:11 +0000 (0:00:00.205) 0:03:32.947 ********* 2025-07-12 15:47:43.790152 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.790159 | orchestrator | 2025-07-12 15:47:43.790167 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-07-12 15:47:43.790175 | orchestrator | Saturday 12 July 2025 15:40:11 +0000 (0:00:00.216) 0:03:33.163 ********* 2025-07-12 15:47:43.790183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:47:43.790191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.790198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.790206 | orchestrator | 
skipping: [testbed-node-3] 2025-07-12 15:47:43.790214 | orchestrator | 2025-07-12 15:47:43.790222 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-07-12 15:47:43.790229 | orchestrator | Saturday 12 July 2025 15:40:12 +0000 (0:00:00.431) 0:03:33.594 ********* 2025-07-12 15:47:43.790237 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.790245 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.790252 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.790260 | orchestrator | 2025-07-12 15:47:43.790268 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-07-12 15:47:43.790276 | orchestrator | Saturday 12 July 2025 15:40:12 +0000 (0:00:00.299) 0:03:33.894 ********* 2025-07-12 15:47:43.790283 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.790291 | orchestrator | 2025-07-12 15:47:43.790298 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-07-12 15:47:43.790306 | orchestrator | Saturday 12 July 2025 15:40:12 +0000 (0:00:00.163) 0:03:34.058 ********* 2025-07-12 15:47:43.790314 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.790322 | orchestrator | 2025-07-12 15:47:43.790330 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-07-12 15:47:43.790338 | orchestrator | Saturday 12 July 2025 15:40:12 +0000 (0:00:00.219) 0:03:34.277 ********* 2025-07-12 15:47:43.790345 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.790353 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.790361 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.790369 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.790376 | orchestrator | 2025-07-12 15:47:43.790384 | orchestrator | RUNNING HANDLER 
[ceph-handler : Set _mds_handler_called before restart] ******** 2025-07-12 15:47:43.790392 | orchestrator | Saturday 12 July 2025 15:40:13 +0000 (0:00:00.833) 0:03:35.111 ********* 2025-07-12 15:47:43.790400 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.790406 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.790413 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.790419 | orchestrator | 2025-07-12 15:47:43.790426 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-07-12 15:47:43.790432 | orchestrator | Saturday 12 July 2025 15:40:13 +0000 (0:00:00.273) 0:03:35.385 ********* 2025-07-12 15:47:43.790438 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.790445 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.790452 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.790458 | orchestrator | 2025-07-12 15:47:43.790466 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-07-12 15:47:43.790479 | orchestrator | Saturday 12 July 2025 15:40:14 +0000 (0:00:01.119) 0:03:36.504 ********* 2025-07-12 15:47:43.790490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.790501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:47:43.790542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.790551 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.790558 | orchestrator | 2025-07-12 15:47:43.790564 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-07-12 15:47:43.790576 | orchestrator | Saturday 12 July 2025 15:40:15 +0000 (0:00:00.843) 0:03:37.348 ********* 2025-07-12 15:47:43.790583 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.790589 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.790596 | orchestrator | ok: 
[testbed-node-5] 2025-07-12 15:47:43.790602 | orchestrator | 2025-07-12 15:47:43.790609 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-07-12 15:47:43.790616 | orchestrator | Saturday 12 July 2025 15:40:16 +0000 (0:00:00.282) 0:03:37.630 ********* 2025-07-12 15:47:43.790622 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.790629 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.790635 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.790642 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.790648 | orchestrator | 2025-07-12 15:47:43.790655 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-12 15:47:43.790662 | orchestrator | Saturday 12 July 2025 15:40:16 +0000 (0:00:00.789) 0:03:38.420 ********* 2025-07-12 15:47:43.790668 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.790680 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.790687 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.790694 | orchestrator | 2025-07-12 15:47:43.790700 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-12 15:47:43.790707 | orchestrator | Saturday 12 July 2025 15:40:17 +0000 (0:00:00.264) 0:03:38.684 ********* 2025-07-12 15:47:43.790713 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.790720 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.790727 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.790733 | orchestrator | 2025-07-12 15:47:43.790740 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-12 15:47:43.790746 | orchestrator | Saturday 12 July 2025 15:40:18 +0000 (0:00:01.138) 0:03:39.823 ********* 2025-07-12 15:47:43.790753 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.790759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:47:43.790766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.790773 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.790779 | orchestrator | 2025-07-12 15:47:43.790786 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-12 15:47:43.790792 | orchestrator | Saturday 12 July 2025 15:40:19 +0000 (0:00:00.790) 0:03:40.614 ********* 2025-07-12 15:47:43.790799 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.790805 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.790812 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.790818 | orchestrator | 2025-07-12 15:47:43.790825 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-07-12 15:47:43.790831 | orchestrator | Saturday 12 July 2025 15:40:19 +0000 (0:00:00.334) 0:03:40.949 ********* 2025-07-12 15:47:43.790838 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.790858 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.790865 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.790872 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.790878 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.790885 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.790891 | orchestrator | 2025-07-12 15:47:43.790898 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-12 15:47:43.790905 | orchestrator | Saturday 12 July 2025 15:40:20 +0000 (0:00:00.831) 0:03:41.780 ********* 2025-07-12 15:47:43.790911 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.790918 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.790924 | orchestrator | skipping: [testbed-node-5] 
2025-07-12 15:47:43.790931 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:47:43.790937 | orchestrator | 2025-07-12 15:47:43.790944 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-07-12 15:47:43.790956 | orchestrator | Saturday 12 July 2025 15:40:21 +0000 (0:00:01.066) 0:03:42.847 ********* 2025-07-12 15:47:43.790963 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.790970 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.790976 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.790983 | orchestrator | 2025-07-12 15:47:43.790990 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-07-12 15:47:43.790996 | orchestrator | Saturday 12 July 2025 15:40:21 +0000 (0:00:00.340) 0:03:43.187 ********* 2025-07-12 15:47:43.791003 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.791009 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.791016 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.791023 | orchestrator | 2025-07-12 15:47:43.791029 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-07-12 15:47:43.791036 | orchestrator | Saturday 12 July 2025 15:40:22 +0000 (0:00:01.262) 0:03:44.449 ********* 2025-07-12 15:47:43.791042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 15:47:43.791049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 15:47:43.791055 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 15:47:43.791062 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791068 | orchestrator | 2025-07-12 15:47:43.791075 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-07-12 15:47:43.791081 | orchestrator | 
Saturday 12 July 2025 15:40:23 +0000 (0:00:00.713) 0:03:45.162 ********* 2025-07-12 15:47:43.791088 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.791095 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.791101 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.791108 | orchestrator | 2025-07-12 15:47:43.791114 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-07-12 15:47:43.791121 | orchestrator | 2025-07-12 15:47:43.791127 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 15:47:43.791153 | orchestrator | Saturday 12 July 2025 15:40:24 +0000 (0:00:00.635) 0:03:45.798 ********* 2025-07-12 15:47:43.791161 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:47:43.791168 | orchestrator | 2025-07-12 15:47:43.791175 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 15:47:43.791181 | orchestrator | Saturday 12 July 2025 15:40:24 +0000 (0:00:00.439) 0:03:46.237 ********* 2025-07-12 15:47:43.791188 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:47:43.791195 | orchestrator | 2025-07-12 15:47:43.791201 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 15:47:43.791208 | orchestrator | Saturday 12 July 2025 15:40:25 +0000 (0:00:00.681) 0:03:46.919 ********* 2025-07-12 15:47:43.791214 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.791221 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.791227 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.791234 | orchestrator | 2025-07-12 15:47:43.791240 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 
2025-07-12 15:47:43.791247 | orchestrator | Saturday 12 July 2025 15:40:26 +0000 (0:00:00.727) 0:03:47.646 ********* 2025-07-12 15:47:43.791253 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791263 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791270 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791276 | orchestrator | 2025-07-12 15:47:43.791283 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 15:47:43.791289 | orchestrator | Saturday 12 July 2025 15:40:26 +0000 (0:00:00.291) 0:03:47.937 ********* 2025-07-12 15:47:43.791296 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791302 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791313 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791320 | orchestrator | 2025-07-12 15:47:43.791326 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 15:47:43.791333 | orchestrator | Saturday 12 July 2025 15:40:26 +0000 (0:00:00.302) 0:03:48.240 ********* 2025-07-12 15:47:43.791339 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791346 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791352 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791359 | orchestrator | 2025-07-12 15:47:43.791366 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 15:47:43.791372 | orchestrator | Saturday 12 July 2025 15:40:27 +0000 (0:00:00.583) 0:03:48.824 ********* 2025-07-12 15:47:43.791379 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.791385 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.791392 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.791398 | orchestrator | 2025-07-12 15:47:43.791405 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 
15:47:43.791411 | orchestrator | Saturday 12 July 2025 15:40:28 +0000 (0:00:00.738) 0:03:49.563 ********* 2025-07-12 15:47:43.791418 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791425 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791431 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791438 | orchestrator | 2025-07-12 15:47:43.791444 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 15:47:43.791451 | orchestrator | Saturday 12 July 2025 15:40:28 +0000 (0:00:00.335) 0:03:49.898 ********* 2025-07-12 15:47:43.791457 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791464 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791470 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791477 | orchestrator | 2025-07-12 15:47:43.791483 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 15:47:43.791490 | orchestrator | Saturday 12 July 2025 15:40:28 +0000 (0:00:00.313) 0:03:50.211 ********* 2025-07-12 15:47:43.791496 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.791503 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.791509 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.791516 | orchestrator | 2025-07-12 15:47:43.791522 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 15:47:43.791529 | orchestrator | Saturday 12 July 2025 15:40:29 +0000 (0:00:00.965) 0:03:51.177 ********* 2025-07-12 15:47:43.791536 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.791542 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.791549 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.791555 | orchestrator | 2025-07-12 15:47:43.791562 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 15:47:43.791568 | orchestrator | 
Saturday 12 July 2025 15:40:30 +0000 (0:00:00.746) 0:03:51.923 ********* 2025-07-12 15:47:43.791575 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791582 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791588 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791595 | orchestrator | 2025-07-12 15:47:43.791601 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 15:47:43.791608 | orchestrator | Saturday 12 July 2025 15:40:30 +0000 (0:00:00.371) 0:03:52.295 ********* 2025-07-12 15:47:43.791614 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.791621 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.791627 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.791634 | orchestrator | 2025-07-12 15:47:43.791641 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 15:47:43.791647 | orchestrator | Saturday 12 July 2025 15:40:31 +0000 (0:00:00.311) 0:03:52.606 ********* 2025-07-12 15:47:43.791654 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791660 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791667 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791674 | orchestrator | 2025-07-12 15:47:43.791684 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 15:47:43.791691 | orchestrator | Saturday 12 July 2025 15:40:31 +0000 (0:00:00.541) 0:03:53.148 ********* 2025-07-12 15:47:43.791697 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791704 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791710 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791717 | orchestrator | 2025-07-12 15:47:43.791723 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 15:47:43.791748 | orchestrator | Saturday 12 July 2025 
15:40:31 +0000 (0:00:00.317) 0:03:53.466 ********* 2025-07-12 15:47:43.791755 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791762 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791769 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791775 | orchestrator | 2025-07-12 15:47:43.791782 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 15:47:43.791788 | orchestrator | Saturday 12 July 2025 15:40:32 +0000 (0:00:00.355) 0:03:53.821 ********* 2025-07-12 15:47:43.791795 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791801 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791808 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791814 | orchestrator | 2025-07-12 15:47:43.791821 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 15:47:43.791828 | orchestrator | Saturday 12 July 2025 15:40:32 +0000 (0:00:00.284) 0:03:54.105 ********* 2025-07-12 15:47:43.791834 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.791841 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.791861 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.791868 | orchestrator | 2025-07-12 15:47:43.791875 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 15:47:43.791882 | orchestrator | Saturday 12 July 2025 15:40:33 +0000 (0:00:00.526) 0:03:54.632 ********* 2025-07-12 15:47:43.791888 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.791898 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.791905 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.791912 | orchestrator | 2025-07-12 15:47:43.791918 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 15:47:43.791925 | orchestrator | Saturday 12 July 2025 15:40:33 +0000 
(0:00:00.311) 0:03:54.944 ********* 2025-07-12 15:47:43.791932 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.791938 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.791945 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.791952 | orchestrator | 2025-07-12 15:47:43.791958 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 15:47:43.791965 | orchestrator | Saturday 12 July 2025 15:40:33 +0000 (0:00:00.292) 0:03:55.236 ********* 2025-07-12 15:47:43.791972 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.791978 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.791985 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.791991 | orchestrator | 2025-07-12 15:47:43.791998 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-07-12 15:47:43.792005 | orchestrator | Saturday 12 July 2025 15:40:34 +0000 (0:00:00.756) 0:03:55.993 ********* 2025-07-12 15:47:43.792011 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.792018 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.792024 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.792031 | orchestrator | 2025-07-12 15:47:43.792038 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-07-12 15:47:43.792044 | orchestrator | Saturday 12 July 2025 15:40:34 +0000 (0:00:00.353) 0:03:56.346 ********* 2025-07-12 15:47:43.792051 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:47:43.792058 | orchestrator | 2025-07-12 15:47:43.792064 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-07-12 15:47:43.792071 | orchestrator | Saturday 12 July 2025 15:40:35 +0000 (0:00:00.605) 0:03:56.951 ********* 2025-07-12 15:47:43.792082 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 15:47:43.792089 | orchestrator | 2025-07-12 15:47:43.792096 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-07-12 15:47:43.792102 | orchestrator | Saturday 12 July 2025 15:40:35 +0000 (0:00:00.144) 0:03:57.096 ********* 2025-07-12 15:47:43.792109 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-07-12 15:47:43.792116 | orchestrator | 2025-07-12 15:47:43.792122 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-07-12 15:47:43.792129 | orchestrator | Saturday 12 July 2025 15:40:37 +0000 (0:00:01.498) 0:03:58.595 ********* 2025-07-12 15:47:43.792135 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.792142 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.792149 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.792155 | orchestrator | 2025-07-12 15:47:43.792162 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-07-12 15:47:43.792169 | orchestrator | Saturday 12 July 2025 15:40:37 +0000 (0:00:00.331) 0:03:58.926 ********* 2025-07-12 15:47:43.792176 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.792183 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.792189 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.792195 | orchestrator | 2025-07-12 15:47:43.792202 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-07-12 15:47:43.792209 | orchestrator | Saturday 12 July 2025 15:40:37 +0000 (0:00:00.310) 0:03:59.237 ********* 2025-07-12 15:47:43.792215 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.792222 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.792229 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.792235 | orchestrator | 2025-07-12 15:47:43.792242 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 
2025-07-12 15:47:43.792248 | orchestrator | Saturday 12 July 2025 15:40:38 +0000 (0:00:01.258) 0:04:00.495 *********
2025-07-12 15:47:43.792255 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.792262 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.792268 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.792275 | orchestrator |
2025-07-12 15:47:43.792282 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-07-12 15:47:43.792288 | orchestrator | Saturday 12 July 2025 15:40:40 +0000 (0:00:01.099) 0:04:01.595 *********
2025-07-12 15:47:43.792295 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.792301 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.792308 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.792315 | orchestrator |
2025-07-12 15:47:43.792321 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-07-12 15:47:43.792328 | orchestrator | Saturday 12 July 2025 15:40:40 +0000 (0:00:00.723) 0:04:02.318 *********
2025-07-12 15:47:43.792334 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.792341 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.792348 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.792354 | orchestrator |
2025-07-12 15:47:43.792380 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-07-12 15:47:43.792388 | orchestrator | Saturday 12 July 2025 15:40:41 +0000 (0:00:00.791) 0:04:03.109 *********
2025-07-12 15:47:43.792395 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.792401 | orchestrator |
2025-07-12 15:47:43.792408 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-07-12 15:47:43.792414 | orchestrator | Saturday 12 July 2025 15:40:42 +0000 (0:00:01.354) 0:04:04.464 *********
2025-07-12 15:47:43.792421 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.792428 | orchestrator |
2025-07-12 15:47:43.792434 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-07-12 15:47:43.792441 | orchestrator | Saturday 12 July 2025 15:40:43 +0000 (0:00:00.683) 0:04:05.147 *********
2025-07-12 15:47:43.792447 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 15:47:43.792454 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:47:43.792464 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:47:43.792471 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 15:47:43.792478 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-07-12 15:47:43.792487 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 15:47:43.792494 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 15:47:43.792501 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-07-12 15:47:43.792507 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 15:47:43.792514 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-07-12 15:47:43.792521 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-07-12 15:47:43.792527 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-07-12 15:47:43.792534 | orchestrator |
2025-07-12 15:47:43.792540 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-07-12 15:47:43.792547 | orchestrator | Saturday 12 July 2025 15:40:46 +0000 (0:00:03.186) 0:04:08.334 *********
2025-07-12 15:47:43.792553 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.792560 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.792566 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.792573 | orchestrator |
2025-07-12 15:47:43.792584 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-07-12 15:47:43.792597 | orchestrator | Saturday 12 July 2025 15:40:48 +0000 (0:00:01.362) 0:04:09.696 *********
2025-07-12 15:47:43.792607 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.792618 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.792628 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.792639 | orchestrator |
2025-07-12 15:47:43.792650 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-07-12 15:47:43.792661 | orchestrator | Saturday 12 July 2025 15:40:48 +0000 (0:00:00.265) 0:04:09.962 *********
2025-07-12 15:47:43.792673 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.792684 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.792711 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.792722 | orchestrator |
2025-07-12 15:47:43.792729 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-07-12 15:47:43.792736 | orchestrator | Saturday 12 July 2025 15:40:48 +0000 (0:00:00.270) 0:04:10.232 *********
2025-07-12 15:47:43.792742 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.792749 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.792755 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.792761 | orchestrator |
2025-07-12 15:47:43.792768 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-07-12 15:47:43.792774 | orchestrator | Saturday 12 July 2025 15:40:50 +0000 (0:00:01.879) 0:04:12.112 *********
2025-07-12 15:47:43.792781 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.792787 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.792794 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.792800 | orchestrator |
2025-07-12 15:47:43.792807 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-07-12 15:47:43.792813 | orchestrator | Saturday 12 July 2025 15:40:52 +0000 (0:00:01.669) 0:04:13.782 *********
2025-07-12 15:47:43.792820 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.792826 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.792833 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.792840 | orchestrator |
2025-07-12 15:47:43.792865 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-07-12 15:47:43.792891 | orchestrator | Saturday 12 July 2025 15:40:52 +0000 (0:00:00.320) 0:04:14.103 *********
2025-07-12 15:47:43.792902 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:47:43.792913 | orchestrator |
2025-07-12 15:47:43.792934 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-07-12 15:47:43.792946 | orchestrator | Saturday 12 July 2025 15:40:53 +0000 (0:00:00.584) 0:04:14.687 *********
2025-07-12 15:47:43.792953 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.792960 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.792966 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.792973 | orchestrator |
2025-07-12 15:47:43.792979 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-07-12 15:47:43.792985 | orchestrator | Saturday 12 July 2025 15:40:53 +0000 (0:00:00.703) 0:04:15.391 *********
2025-07-12 15:47:43.792992 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.792998 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.793005 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.793011 | orchestrator |
2025-07-12 15:47:43.793018 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-07-12 15:47:43.793025 | orchestrator | Saturday 12 July 2025 15:40:54 +0000 (0:00:00.350) 0:04:15.741 *********
2025-07-12 15:47:43.793031 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:47:43.793038 | orchestrator |
2025-07-12 15:47:43.793044 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-07-12 15:47:43.793082 | orchestrator | Saturday 12 July 2025 15:40:54 +0000 (0:00:00.550) 0:04:16.292 *********
2025-07-12 15:47:43.793090 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.793097 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.793103 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.793110 | orchestrator |
2025-07-12 15:47:43.793116 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-07-12 15:47:43.793123 | orchestrator | Saturday 12 July 2025 15:40:56 +0000 (0:00:01.934) 0:04:18.227 *********
2025-07-12 15:47:43.793129 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.793136 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.793142 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.793149 | orchestrator |
2025-07-12 15:47:43.793155 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-07-12 15:47:43.793162 | orchestrator | Saturday 12 July 2025 15:40:57 +0000 (0:00:01.180) 0:04:19.407 *********
2025-07-12 15:47:43.793168 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.793175 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.793181 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.793188 | orchestrator |
2025-07-12 15:47:43.793194 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-07-12 15:47:43.793201 | orchestrator | Saturday 12 July 2025 15:40:59 +0000 (0:00:01.855) 0:04:21.263 *********
2025-07-12 15:47:43.793212 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.793219 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.793226 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.793232 | orchestrator |
2025-07-12 15:47:43.793238 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-07-12 15:47:43.793245 | orchestrator | Saturday 12 July 2025 15:41:01 +0000 (0:00:02.067) 0:04:23.331 *********
2025-07-12 15:47:43.793252 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:47:43.793304 | orchestrator |
2025-07-12 15:47:43.793312 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-07-12 15:47:43.793319 | orchestrator | Saturday 12 July 2025 15:41:02 +0000 (0:00:00.833) 0:04:24.164 *********
2025-07-12 15:47:43.793325 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-07-12 15:47:43.793332 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.793339 | orchestrator |
2025-07-12 15:47:43.793345 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-07-12 15:47:43.793352 | orchestrator | Saturday 12 July 2025 15:41:24 +0000 (0:00:21.908) 0:04:46.073 *********
2025-07-12 15:47:43.793364 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.793370 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.793377 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.793383 | orchestrator |
2025-07-12 15:47:43.793390 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-07-12 15:47:43.793397 | orchestrator | Saturday 12 July 2025 15:41:34 +0000 (0:00:09.984) 0:04:56.058 *********
2025-07-12 15:47:43.793403 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.793410 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.793416 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.793423 | orchestrator |
2025-07-12 15:47:43.793429 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-07-12 15:47:43.793436 | orchestrator | Saturday 12 July 2025 15:41:34 +0000 (0:00:00.314) 0:04:56.372 *********
2025-07-12 15:47:43.793443 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5cb855949e5e7220fcb171dc4792e3634336bb03'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-07-12 15:47:43.793451 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5cb855949e5e7220fcb171dc4792e3634336bb03'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-07-12 15:47:43.793459 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5cb855949e5e7220fcb171dc4792e3634336bb03'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-07-12 15:47:43.793467 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5cb855949e5e7220fcb171dc4792e3634336bb03'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-07-12 15:47:43.793498 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5cb855949e5e7220fcb171dc4792e3634336bb03'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-07-12 15:47:43.793507 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5cb855949e5e7220fcb171dc4792e3634336bb03'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5cb855949e5e7220fcb171dc4792e3634336bb03'}])
2025-07-12 15:47:43.793515 | orchestrator |
2025-07-12 15:47:43.793522 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 15:47:43.793528 | orchestrator | Saturday 12 July 2025 15:41:49 +0000 (0:00:15.004) 0:05:11.377 *********
2025-07-12 15:47:43.793535 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.793542 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.793548 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.793555 | orchestrator |
2025-07-12 15:47:43.793562 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-07-12 15:47:43.793572 | orchestrator | Saturday 12 July 2025 15:41:50 +0000 (0:00:00.347) 0:05:11.725 *********
2025-07-12 15:47:43.793583 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:47:43.793590 | orchestrator |
2025-07-12 15:47:43.793597 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-07-12 15:47:43.793603 | orchestrator | Saturday 12 July 2025 15:41:51 +0000 (0:00:00.926) 0:05:12.651 *********
2025-07-12 15:47:43.793610 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.793617 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.793623 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.793630 | orchestrator |
2025-07-12 15:47:43.793636 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-07-12 15:47:43.793643 | orchestrator | Saturday 12 July 2025 15:41:51 +0000 (0:00:00.376) 0:05:13.028 *********
2025-07-12 15:47:43.793650 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.793656 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.793663 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.793670 | orchestrator |
2025-07-12 15:47:43.793676 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-07-12 15:47:43.793683 | orchestrator | Saturday 12 July 2025 15:41:51 +0000 (0:00:00.371) 0:05:13.399 *********
2025-07-12 15:47:43.793689 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 15:47:43.793696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 15:47:43.793703 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 15:47:43.793709 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.793716 | orchestrator |
2025-07-12 15:47:43.793723 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-07-12 15:47:43.793729 | orchestrator | Saturday 12 July 2025 15:41:52 +0000 (0:00:00.937) 0:05:14.337 *********
2025-07-12 15:47:43.793736 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.793742 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.793749 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.793756 | orchestrator |
2025-07-12 15:47:43.793763 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-07-12 15:47:43.793769 | orchestrator |
2025-07-12 15:47:43.793776 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 15:47:43.793783 | orchestrator | Saturday 12 July 2025 15:41:53 +0000 (0:00:00.774) 0:05:15.111 *********
2025-07-12 15:47:43.793789 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:47:43.793796 | orchestrator |
2025-07-12 15:47:43.793803 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 15:47:43.793809 | orchestrator | Saturday 12 July 2025 15:41:54 +0000 (0:00:00.490) 0:05:15.602 *********
2025-07-12 15:47:43.793816 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:47:43.793823 | orchestrator |
2025-07-12 15:47:43.793829 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 15:47:43.793836 | orchestrator | Saturday 12 July 2025 15:41:54 +0000 (0:00:00.713) 0:05:16.315 *********
2025-07-12 15:47:43.793843 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.793862 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.793869 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.793875 | orchestrator |
2025-07-12 15:47:43.793882 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 15:47:43.793889 | orchestrator | Saturday 12 July 2025 15:41:55 +0000 (0:00:00.676) 0:05:16.992 *********
2025-07-12 15:47:43.793895 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.793902 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.793909 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.793915 | orchestrator |
2025-07-12 15:47:43.793922 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 15:47:43.793933 | orchestrator | Saturday 12 July 2025 15:41:55 +0000 (0:00:00.366) 0:05:17.358 *********
2025-07-12 15:47:43.793940 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.793947 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.793953 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.793960 | orchestrator |
2025-07-12 15:47:43.793966 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 15:47:43.793973 | orchestrator | Saturday 12 July 2025 15:41:56 +0000 (0:00:00.511) 0:05:17.869 *********
2025-07-12 15:47:43.793980 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.793986 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.793993 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.793999 | orchestrator |
2025-07-12 15:47:43.794053 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 15:47:43.794063 | orchestrator | Saturday 12 July 2025 15:41:56 +0000 (0:00:00.310) 0:05:18.180 *********
2025-07-12 15:47:43.794069 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.794076 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.794083 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.794089 | orchestrator |
2025-07-12 15:47:43.794096 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 15:47:43.794102 | orchestrator | Saturday 12 July 2025 15:41:57 +0000 (0:00:00.657) 0:05:18.838 *********
2025-07-12 15:47:43.794109 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.794116 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.794122 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.794128 | orchestrator |
2025-07-12 15:47:43.794135 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 15:47:43.794141 | orchestrator | Saturday 12 July 2025 15:41:57 +0000 (0:00:00.305) 0:05:19.143 *********
2025-07-12 15:47:43.794148 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.794154 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.794161 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.794167 | orchestrator |
2025-07-12 15:47:43.794174 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 15:47:43.794187 | orchestrator | Saturday 12 July 2025 15:41:58 +0000 (0:00:00.542) 0:05:19.685 *********
2025-07-12 15:47:43.794194 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.794201 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.794207 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.794214 | orchestrator |
2025-07-12 15:47:43.794220 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 15:47:43.794227 | orchestrator | Saturday 12 July 2025 15:41:58 +0000 (0:00:00.758) 0:05:20.444 *********
2025-07-12 15:47:43.794233 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.794240 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.794246 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.794252 | orchestrator |
2025-07-12 15:47:43.794259 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 15:47:43.794266 | orchestrator | Saturday 12 July 2025 15:41:59 +0000 (0:00:00.806) 0:05:21.250 *********
2025-07-12 15:47:43.794272 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.794279 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.794285 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.794292 | orchestrator |
2025-07-12 15:47:43.794298 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 15:47:43.794305 | orchestrator | Saturday 12 July 2025 15:41:59 +0000 (0:00:00.306) 0:05:21.557 *********
2025-07-12 15:47:43.794311 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.794318 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.794324 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.794330 | orchestrator |
2025-07-12 15:47:43.794337 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 15:47:43.794344 | orchestrator | Saturday 12 July 2025 15:42:00 +0000 (0:00:00.426) 0:05:21.984 *********
2025-07-12 15:47:43.794355 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.794362 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.794368 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.794375 | orchestrator |
2025-07-12 15:47:43.794381 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 15:47:43.794388 | orchestrator | Saturday 12 July 2025 15:42:00 +0000 (0:00:00.266) 0:05:22.251 *********
2025-07-12 15:47:43.794394 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.794401 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.794407 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.794413 | orchestrator |
2025-07-12 15:47:43.794420 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 15:47:43.794427 | orchestrator | Saturday 12 July 2025 15:42:00 +0000 (0:00:00.276) 0:05:22.528 *********
2025-07-12 15:47:43.794433 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.794439 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.794446 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.794452 | orchestrator |
2025-07-12 15:47:43.794459 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 15:47:43.794465 | orchestrator | Saturday 12 July 2025 15:42:01 +0000 (0:00:00.283) 0:05:22.812 *********
2025-07-12 15:47:43.794472 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.794478 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.794485 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.794491 | orchestrator |
2025-07-12 15:47:43.794498 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 15:47:43.794504 | orchestrator | Saturday 12 July 2025 15:42:01 +0000 (0:00:00.435) 0:05:23.247 *********
2025-07-12 15:47:43.794511 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.794517 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.794524 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.794530 | orchestrator |
2025-07-12 15:47:43.794537 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 15:47:43.794543 | orchestrator | Saturday 12 July 2025 15:42:01 +0000 (0:00:00.265) 0:05:23.512 *********
2025-07-12 15:47:43.794550 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.794556 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.794563 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.794569 | orchestrator |
2025-07-12 15:47:43.794576 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 15:47:43.794583 | orchestrator | Saturday 12 July 2025 15:42:02 +0000 (0:00:00.277) 0:05:23.789 *********
2025-07-12 15:47:43.794589 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.794595 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.794602 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.794608 | orchestrator |
2025-07-12 15:47:43.794615 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 15:47:43.794621 | orchestrator | Saturday 12 July 2025 15:42:02 +0000 (0:00:00.276) 0:05:24.066 *********
2025-07-12 15:47:43.794628 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.794634 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.794640 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.794647 | orchestrator |
2025-07-12 15:47:43.794654 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-07-12 15:47:43.794679 | orchestrator | Saturday 12 July 2025 15:42:03 +0000 (0:00:00.646) 0:05:24.713 *********
2025-07-12 15:47:43.794687 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 15:47:43.794694 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 15:47:43.794701 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 15:47:43.794707 | orchestrator |
2025-07-12 15:47:43.794714 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-07-12 15:47:43.794720 | orchestrator | Saturday 12 July 2025 15:42:03 +0000 (0:00:00.495) 0:05:25.208 *********
2025-07-12 15:47:43.794731 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:47:43.794738 | orchestrator |
2025-07-12 15:47:43.794745 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-07-12 15:47:43.794751 | orchestrator | Saturday 12 July 2025 15:42:04 +0000 (0:00:00.464) 0:05:25.672 *********
2025-07-12 15:47:43.794758 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.794764 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:47:43.794771 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:47:43.794777 | orchestrator |
2025-07-12 15:47:43.794787 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-07-12 15:47:43.794794 | orchestrator | Saturday 12 July 2025 15:42:04 +0000 (0:00:00.809) 0:05:26.482 *********
2025-07-12 15:47:43.794800 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.794807 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.794813 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.794820 | orchestrator |
2025-07-12 15:47:43.794826 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-07-12 15:47:43.794833 | orchestrator | Saturday 12 July 2025 15:42:05 +0000 (0:00:00.305) 0:05:26.787 *********
2025-07-12 15:47:43.794840 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 15:47:43.794879 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 15:47:43.794887 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 15:47:43.794893 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-07-12 15:47:43.794900 | orchestrator |
2025-07-12 15:47:43.794906 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-07-12 15:47:43.794913 | orchestrator | Saturday 12 July 2025 15:42:14 +0000 (0:00:09.401) 0:05:36.189 *********
2025-07-12 15:47:43.794919 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.794925 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.794931 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.794937 | orchestrator |
2025-07-12 15:47:43.794943 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-07-12 15:47:43.794949 | orchestrator | Saturday 12 July 2025 15:42:15 +0000 (0:00:00.395) 0:05:36.584 *********
2025-07-12 15:47:43.794956 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-12 15:47:43.794962 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-12 15:47:43.794968 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-12 15:47:43.794974 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:47:43.794980 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-07-12 15:47:43.794986 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:47:43.794992 | orchestrator |
2025-07-12 15:47:43.794999 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-07-12 15:47:43.795005 | orchestrator | Saturday 12 July 2025 15:42:17 +0000 (0:00:02.204) 0:05:38.788 *********
2025-07-12 15:47:43.795011 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-12 15:47:43.795017 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-12 15:47:43.795023 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-12 15:47:43.795029 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 15:47:43.795036 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-12 15:47:43.795042 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-12 15:47:43.795048 | orchestrator |
2025-07-12 15:47:43.795054 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-07-12 15:47:43.795060 | orchestrator | Saturday 12 July 2025 15:42:18 +0000 (0:00:01.451) 0:05:40.240 *********
2025-07-12 15:47:43.795066 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:47:43.795072 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:47:43.795078 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:47:43.795090 | orchestrator |
2025-07-12 15:47:43.795097 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-07-12 15:47:43.795103 | orchestrator | Saturday 12 July 2025 15:42:19 +0000 (0:00:00.677) 0:05:40.917 *********
2025-07-12 15:47:43.795109 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.795115 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.795121 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.795128 | orchestrator |
2025-07-12 15:47:43.795134 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-07-12 15:47:43.795140 | orchestrator | Saturday 12 July 2025 15:42:19 +0000 (0:00:00.329) 0:05:41.247 *********
2025-07-12 15:47:43.795146 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.795153 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.795159 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.795165 | orchestrator |
2025-07-12 15:47:43.795171 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-07-12 15:47:43.795177 | orchestrator | Saturday 12 July 2025 15:42:19 +0000 (0:00:00.301) 0:05:41.549 *********
2025-07-12 15:47:43.795183 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:47:43.795189 | orchestrator |
2025-07-12 15:47:43.795195 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-07-12 15:47:43.795201 | orchestrator | Saturday 12 July 2025 15:42:20 +0000 (0:00:00.888) 0:05:42.438 *********
2025-07-12 15:47:43.795207 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.795233 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.795240 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.795246 | orchestrator |
2025-07-12 15:47:43.795252 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-07-12 15:47:43.795258 | orchestrator | Saturday 12 July 2025 15:42:21 +0000 (0:00:00.309) 0:05:42.747 *********
2025-07-12 15:47:43.795264 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:47:43.795270 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:47:43.795276 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:47:43.795282 | orchestrator |
2025-07-12 15:47:43.795289 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-07-12 15:47:43.795295 | orchestrator | Saturday 12 July 2025 15:42:21 +0000 (0:00:00.296) 0:05:43.044 *********
2025-07-12 15:47:43.795301 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:47:43.795307 | orchestrator |
2025-07-12 15:47:43.795313 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-07-12 15:47:43.795319 | orchestrator | Saturday 12 July 2025 15:42:22 +0000 (0:00:00.756) 0:05:43.801 *********
2025-07-12 15:47:43.795325 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:47:43.795331 | orchestrator | changed:
[testbed-node-1] 2025-07-12 15:47:43.795337 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.795343 | orchestrator | 2025-07-12 15:47:43.795352 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-07-12 15:47:43.795358 | orchestrator | Saturday 12 July 2025 15:42:23 +0000 (0:00:01.244) 0:05:45.045 ********* 2025-07-12 15:47:43.795364 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.795370 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.795377 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.795383 | orchestrator | 2025-07-12 15:47:43.795389 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-07-12 15:47:43.795395 | orchestrator | Saturday 12 July 2025 15:42:24 +0000 (0:00:01.140) 0:05:46.186 ********* 2025-07-12 15:47:43.795401 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.795407 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.795413 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.795419 | orchestrator | 2025-07-12 15:47:43.795425 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-07-12 15:47:43.795431 | orchestrator | Saturday 12 July 2025 15:42:26 +0000 (0:00:01.951) 0:05:48.137 ********* 2025-07-12 15:47:43.795441 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.795447 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.795453 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.795459 | orchestrator | 2025-07-12 15:47:43.795465 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-07-12 15:47:43.795471 | orchestrator | Saturday 12 July 2025 15:42:28 +0000 (0:00:01.859) 0:05:49.997 ********* 2025-07-12 15:47:43.795477 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.795484 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 15:47:43.795490 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-07-12 15:47:43.795496 | orchestrator | 2025-07-12 15:47:43.795502 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-07-12 15:47:43.795508 | orchestrator | Saturday 12 July 2025 15:42:28 +0000 (0:00:00.395) 0:05:50.392 ********* 2025-07-12 15:47:43.795514 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-07-12 15:47:43.795520 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-07-12 15:47:43.795526 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-07-12 15:47:43.795532 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-07-12 15:47:43.795539 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-07-12 15:47:43.795545 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-12 15:47:43.795551 | orchestrator | 2025-07-12 15:47:43.795557 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-07-12 15:47:43.795563 | orchestrator | Saturday 12 July 2025 15:42:59 +0000 (0:00:30.224) 0:06:20.617 ********* 2025-07-12 15:47:43.795569 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-12 15:47:43.795575 | orchestrator | 2025-07-12 15:47:43.795581 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-07-12 15:47:43.795587 | orchestrator | Saturday 12 July 2025 15:43:00 +0000 (0:00:01.550) 0:06:22.168 ********* 2025-07-12 15:47:43.795593 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.795600 | orchestrator | 2025-07-12 15:47:43.795606 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-07-12 15:47:43.795612 | orchestrator | Saturday 12 July 2025 15:43:01 +0000 (0:00:00.855) 0:06:23.023 ********* 2025-07-12 15:47:43.795618 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.795624 | orchestrator | 2025-07-12 15:47:43.795630 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-07-12 15:47:43.795636 | orchestrator | Saturday 12 July 2025 15:43:01 +0000 (0:00:00.147) 0:06:23.171 ********* 2025-07-12 15:47:43.795642 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-07-12 15:47:43.795648 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-07-12 15:47:43.795654 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-07-12 15:47:43.795660 | orchestrator | 2025-07-12 15:47:43.795666 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-07-12 15:47:43.795673 | orchestrator | Saturday 12 July 2025 15:43:08 +0000 (0:00:06.647) 0:06:29.819 ********* 2025-07-12 15:47:43.795679 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-07-12 15:47:43.795702 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-07-12 15:47:43.795710 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-07-12 15:47:43.795716 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-07-12 15:47:43.795722 | orchestrator | 2025-07-12 15:47:43.795732 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 15:47:43.795739 | orchestrator | Saturday 12 July 2025 15:43:13 +0000 (0:00:04.812) 0:06:34.631 ********* 2025-07-12 15:47:43.795745 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.795751 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.795757 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.795763 | orchestrator | 2025-07-12 15:47:43.795769 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-12 15:47:43.795775 | orchestrator | Saturday 12 July 2025 15:43:14 +0000 (0:00:01.072) 0:06:35.703 ********* 2025-07-12 15:47:43.795781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:47:43.795787 | orchestrator | 2025-07-12 15:47:43.795793 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-07-12 15:47:43.795802 | orchestrator | Saturday 12 July 2025 15:43:14 +0000 (0:00:00.559) 0:06:36.263 ********* 2025-07-12 15:47:43.795808 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.795814 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.795820 | orchestrator | ok: 
[testbed-node-2] 2025-07-12 15:47:43.795826 | orchestrator | 2025-07-12 15:47:43.795832 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-07-12 15:47:43.795838 | orchestrator | Saturday 12 July 2025 15:43:15 +0000 (0:00:00.326) 0:06:36.590 ********* 2025-07-12 15:47:43.795855 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.795861 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.795867 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.795873 | orchestrator | 2025-07-12 15:47:43.795879 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-07-12 15:47:43.795885 | orchestrator | Saturday 12 July 2025 15:43:16 +0000 (0:00:01.806) 0:06:38.396 ********* 2025-07-12 15:47:43.795891 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 15:47:43.795897 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 15:47:43.795904 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 15:47:43.795910 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.795916 | orchestrator | 2025-07-12 15:47:43.795922 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-07-12 15:47:43.795928 | orchestrator | Saturday 12 July 2025 15:43:17 +0000 (0:00:00.658) 0:06:39.054 ********* 2025-07-12 15:47:43.795934 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.795940 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.795946 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.795952 | orchestrator | 2025-07-12 15:47:43.795958 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-07-12 15:47:43.795964 | orchestrator | 2025-07-12 15:47:43.795970 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 
15:47:43.795976 | orchestrator | Saturday 12 July 2025 15:43:18 +0000 (0:00:00.525) 0:06:39.580 ********* 2025-07-12 15:47:43.795982 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.795988 | orchestrator | 2025-07-12 15:47:43.795995 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 15:47:43.796001 | orchestrator | Saturday 12 July 2025 15:43:18 +0000 (0:00:00.689) 0:06:40.270 ********* 2025-07-12 15:47:43.796007 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.796013 | orchestrator | 2025-07-12 15:47:43.796019 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 15:47:43.796025 | orchestrator | Saturday 12 July 2025 15:43:19 +0000 (0:00:00.509) 0:06:40.779 ********* 2025-07-12 15:47:43.796031 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.796037 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796043 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796053 | orchestrator | 2025-07-12 15:47:43.796059 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 15:47:43.796065 | orchestrator | Saturday 12 July 2025 15:43:19 +0000 (0:00:00.280) 0:06:41.060 ********* 2025-07-12 15:47:43.796071 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796077 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796083 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796089 | orchestrator | 2025-07-12 15:47:43.796095 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 15:47:43.796101 | orchestrator | Saturday 12 July 2025 15:43:20 +0000 (0:00:00.908) 0:06:41.969 ********* 
2025-07-12 15:47:43.796107 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796113 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796119 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796125 | orchestrator | 2025-07-12 15:47:43.796131 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 15:47:43.796137 | orchestrator | Saturday 12 July 2025 15:43:21 +0000 (0:00:00.738) 0:06:42.707 ********* 2025-07-12 15:47:43.796143 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796149 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796155 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796161 | orchestrator | 2025-07-12 15:47:43.796167 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 15:47:43.796173 | orchestrator | Saturday 12 July 2025 15:43:21 +0000 (0:00:00.634) 0:06:43.341 ********* 2025-07-12 15:47:43.796179 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.796185 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796191 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796197 | orchestrator | 2025-07-12 15:47:43.796203 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 15:47:43.796226 | orchestrator | Saturday 12 July 2025 15:43:22 +0000 (0:00:00.306) 0:06:43.647 ********* 2025-07-12 15:47:43.796233 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.796240 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796246 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796252 | orchestrator | 2025-07-12 15:47:43.796258 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 15:47:43.796264 | orchestrator | Saturday 12 July 2025 15:43:22 +0000 (0:00:00.669) 0:06:44.317 ********* 2025-07-12 15:47:43.796270 | 
orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.796276 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796282 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796288 | orchestrator | 2025-07-12 15:47:43.796294 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 15:47:43.796300 | orchestrator | Saturday 12 July 2025 15:43:23 +0000 (0:00:00.296) 0:06:44.614 ********* 2025-07-12 15:47:43.796306 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796313 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796319 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796325 | orchestrator | 2025-07-12 15:47:43.796331 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 15:47:43.796337 | orchestrator | Saturday 12 July 2025 15:43:23 +0000 (0:00:00.661) 0:06:45.276 ********* 2025-07-12 15:47:43.796343 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796349 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796355 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796361 | orchestrator | 2025-07-12 15:47:43.796367 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 15:47:43.796373 | orchestrator | Saturday 12 July 2025 15:43:24 +0000 (0:00:00.676) 0:06:45.952 ********* 2025-07-12 15:47:43.796379 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.796385 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796391 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796397 | orchestrator | 2025-07-12 15:47:43.796403 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 15:47:43.796413 | orchestrator | Saturday 12 July 2025 15:43:24 +0000 (0:00:00.541) 0:06:46.494 ********* 2025-07-12 15:47:43.796419 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 15:47:43.796425 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796431 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796437 | orchestrator | 2025-07-12 15:47:43.796443 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 15:47:43.796450 | orchestrator | Saturday 12 July 2025 15:43:25 +0000 (0:00:00.306) 0:06:46.800 ********* 2025-07-12 15:47:43.796456 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796462 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796468 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796474 | orchestrator | 2025-07-12 15:47:43.796480 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 15:47:43.796503 | orchestrator | Saturday 12 July 2025 15:43:25 +0000 (0:00:00.373) 0:06:47.174 ********* 2025-07-12 15:47:43.796509 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796515 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796521 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796527 | orchestrator | 2025-07-12 15:47:43.796534 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 15:47:43.796540 | orchestrator | Saturday 12 July 2025 15:43:25 +0000 (0:00:00.351) 0:06:47.525 ********* 2025-07-12 15:47:43.796546 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796552 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796558 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796564 | orchestrator | 2025-07-12 15:47:43.796570 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 15:47:43.796576 | orchestrator | Saturday 12 July 2025 15:43:26 +0000 (0:00:00.632) 0:06:48.158 ********* 2025-07-12 15:47:43.796582 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.796588 | 
orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796594 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796600 | orchestrator | 2025-07-12 15:47:43.796606 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 15:47:43.796612 | orchestrator | Saturday 12 July 2025 15:43:26 +0000 (0:00:00.311) 0:06:48.469 ********* 2025-07-12 15:47:43.796618 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.796624 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796630 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796636 | orchestrator | 2025-07-12 15:47:43.796642 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 15:47:43.796648 | orchestrator | Saturday 12 July 2025 15:43:27 +0000 (0:00:00.312) 0:06:48.782 ********* 2025-07-12 15:47:43.796654 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.796661 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796667 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796673 | orchestrator | 2025-07-12 15:47:43.796679 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 15:47:43.796685 | orchestrator | Saturday 12 July 2025 15:43:27 +0000 (0:00:00.310) 0:06:49.093 ********* 2025-07-12 15:47:43.796691 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796697 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796703 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796709 | orchestrator | 2025-07-12 15:47:43.796715 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 15:47:43.796721 | orchestrator | Saturday 12 July 2025 15:43:28 +0000 (0:00:00.629) 0:06:49.723 ********* 2025-07-12 15:47:43.796727 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796733 | orchestrator | ok: 
[testbed-node-4] 2025-07-12 15:47:43.796739 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796745 | orchestrator | 2025-07-12 15:47:43.796751 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-07-12 15:47:43.796757 | orchestrator | Saturday 12 July 2025 15:43:28 +0000 (0:00:00.528) 0:06:50.251 ********* 2025-07-12 15:47:43.796768 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796774 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796780 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796786 | orchestrator | 2025-07-12 15:47:43.796792 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-07-12 15:47:43.796798 | orchestrator | Saturday 12 July 2025 15:43:28 +0000 (0:00:00.299) 0:06:50.551 ********* 2025-07-12 15:47:43.796807 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-12 15:47:43.796814 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 15:47:43.796820 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 15:47:43.796826 | orchestrator | 2025-07-12 15:47:43.796832 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-07-12 15:47:43.796839 | orchestrator | Saturday 12 July 2025 15:43:30 +0000 (0:00:01.050) 0:06:51.601 ********* 2025-07-12 15:47:43.796870 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.796877 | orchestrator | 2025-07-12 15:47:43.796883 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-07-12 15:47:43.796890 | orchestrator | Saturday 12 July 2025 15:43:30 +0000 (0:00:00.945) 0:06:52.547 ********* 2025-07-12 15:47:43.796896 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 15:47:43.796902 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796908 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796914 | orchestrator | 2025-07-12 15:47:43.796920 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-07-12 15:47:43.796929 | orchestrator | Saturday 12 July 2025 15:43:31 +0000 (0:00:00.308) 0:06:52.855 ********* 2025-07-12 15:47:43.796935 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.796941 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.796947 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.796954 | orchestrator | 2025-07-12 15:47:43.796960 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-07-12 15:47:43.796966 | orchestrator | Saturday 12 July 2025 15:43:31 +0000 (0:00:00.302) 0:06:53.158 ********* 2025-07-12 15:47:43.796972 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.796978 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.796984 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.796990 | orchestrator | 2025-07-12 15:47:43.796997 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-07-12 15:47:43.797003 | orchestrator | Saturday 12 July 2025 15:43:32 +0000 (0:00:00.917) 0:06:54.076 ********* 2025-07-12 15:47:43.797009 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.797015 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.797021 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.797027 | orchestrator | 2025-07-12 15:47:43.797034 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-07-12 15:47:43.797040 | orchestrator | Saturday 12 July 2025 15:43:32 +0000 (0:00:00.322) 0:06:54.398 ********* 2025-07-12 15:47:43.797046 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-12 15:47:43.797052 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-12 15:47:43.797058 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-12 15:47:43.797065 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-12 15:47:43.797071 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-12 15:47:43.797077 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-12 15:47:43.797083 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-12 15:47:43.797093 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-12 15:47:43.797100 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-12 15:47:43.797106 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-12 15:47:43.797112 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-12 15:47:43.797118 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-12 15:47:43.797124 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-12 15:47:43.797130 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-12 15:47:43.797136 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-12 15:47:43.797142 | orchestrator | 2025-07-12 15:47:43.797149 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-07-12 15:47:43.797155 | orchestrator | Saturday 12 July 2025 15:43:34 +0000 (0:00:02.139) 0:06:56.538 ********* 2025-07-12 15:47:43.797161 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.797167 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.797173 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.797179 | orchestrator | 2025-07-12 15:47:43.797185 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-07-12 15:47:43.797192 | orchestrator | Saturday 12 July 2025 15:43:35 +0000 (0:00:00.304) 0:06:56.842 ********* 2025-07-12 15:47:43.797198 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.797204 | orchestrator | 2025-07-12 15:47:43.797210 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-07-12 15:47:43.797216 | orchestrator | Saturday 12 July 2025 15:43:36 +0000 (0:00:00.859) 0:06:57.701 ********* 2025-07-12 15:47:43.797222 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-12 15:47:43.797228 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-12 15:47:43.797239 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-12 15:47:43.797246 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-07-12 15:47:43.797252 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-07-12 15:47:43.797258 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-07-12 15:47:43.797264 | orchestrator | 2025-07-12 15:47:43.797271 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-07-12 15:47:43.797277 | orchestrator | Saturday 12 July 2025 15:43:37 +0000 (0:00:01.034) 0:06:58.735 ********* 2025-07-12 15:47:43.797283 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 15:47:43.797289 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 15:47:43.797295 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 15:47:43.797301 | orchestrator | 2025-07-12 15:47:43.797308 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-07-12 15:47:43.797314 | orchestrator | Saturday 12 July 2025 15:43:39 +0000 (0:00:02.290) 0:07:01.026 ********* 2025-07-12 15:47:43.797320 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 15:47:43.797326 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 15:47:43.797332 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.797343 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 15:47:43.797349 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-12 15:47:43.797355 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.797362 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 15:47:43.797368 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-12 15:47:43.797377 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.797382 | orchestrator | 2025-07-12 15:47:43.797387 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-07-12 15:47:43.797393 | orchestrator | Saturday 12 July 2025 15:43:40 +0000 (0:00:01.392) 0:07:02.419 ********* 2025-07-12 15:47:43.797399 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 15:47:43.797404 | orchestrator | 2025-07-12 15:47:43.797409 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-07-12 15:47:43.797415 | orchestrator | Saturday 12 July 2025 15:43:42 +0000 (0:00:02.042) 0:07:04.462 ********* 2025-07-12 15:47:43.797420 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.797425 | orchestrator | 2025-07-12 15:47:43.797431 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-07-12 15:47:43.797436 | orchestrator | Saturday 12 July 2025 15:43:43 +0000 (0:00:00.572) 0:07:05.034 ********* 2025-07-12 15:47:43.797442 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7', 'data_vg': 'ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7'}) 2025-07-12 15:47:43.797447 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ed518422-90c3-5ab9-913f-91d667874e9d', 'data_vg': 'ceph-ed518422-90c3-5ab9-913f-91d667874e9d'}) 2025-07-12 15:47:43.797453 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0c0189bb-8103-55ae-95fc-ac60d34dc15f', 'data_vg': 'ceph-0c0189bb-8103-55ae-95fc-ac60d34dc15f'}) 2025-07-12 15:47:43.797458 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d3106c13-92fd-5dcd-ba4d-74ce9f77b023', 'data_vg': 'ceph-d3106c13-92fd-5dcd-ba4d-74ce9f77b023'}) 2025-07-12 15:47:43.797464 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f', 'data_vg': 'ceph-2608adc8-8e22-540f-a74d-9f1d5d1ddc4f'}) 2025-07-12 15:47:43.797469 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-66e431f6-efaf-5b66-8dd9-edbf314ce410', 'data_vg': 'ceph-66e431f6-efaf-5b66-8dd9-edbf314ce410'}) 2025-07-12 15:47:43.797474 | orchestrator | 2025-07-12 15:47:43.797480 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-07-12 15:47:43.797485 | orchestrator | Saturday 12 July 2025 15:44:23 +0000 (0:00:39.878) 0:07:44.912 ********* 2025-07-12 15:47:43.797491 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.797496 | orchestrator | skipping: [testbed-node-4] 2025-07-12 
15:47:43.797501 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.797507 | orchestrator | 2025-07-12 15:47:43.797512 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-07-12 15:47:43.797517 | orchestrator | Saturday 12 July 2025 15:44:23 +0000 (0:00:00.598) 0:07:45.511 ********* 2025-07-12 15:47:43.797523 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.797528 | orchestrator | 2025-07-12 15:47:43.797534 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-07-12 15:47:43.797539 | orchestrator | Saturday 12 July 2025 15:44:24 +0000 (0:00:00.587) 0:07:46.099 ********* 2025-07-12 15:47:43.797544 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.797550 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.797555 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.797561 | orchestrator | 2025-07-12 15:47:43.797566 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-07-12 15:47:43.797571 | orchestrator | Saturday 12 July 2025 15:44:25 +0000 (0:00:00.689) 0:07:46.788 ********* 2025-07-12 15:47:43.797577 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.797582 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.797587 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.797593 | orchestrator | 2025-07-12 15:47:43.797598 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-07-12 15:47:43.797604 | orchestrator | Saturday 12 July 2025 15:44:27 +0000 (0:00:02.738) 0:07:49.527 ********* 2025-07-12 15:47:43.797615 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.797621 | orchestrator | 2025-07-12 15:47:43.797626 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-07-12 15:47:43.797631 | orchestrator | Saturday 12 July 2025 15:44:28 +0000 (0:00:00.512) 0:07:50.039 ********* 2025-07-12 15:47:43.797637 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.797642 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.797647 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.797653 | orchestrator | 2025-07-12 15:47:43.797658 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-07-12 15:47:43.797664 | orchestrator | Saturday 12 July 2025 15:44:29 +0000 (0:00:01.155) 0:07:51.195 ********* 2025-07-12 15:47:43.797669 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.797674 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.797680 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.797685 | orchestrator | 2025-07-12 15:47:43.797690 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-07-12 15:47:43.797696 | orchestrator | Saturday 12 July 2025 15:44:30 +0000 (0:00:01.353) 0:07:52.548 ********* 2025-07-12 15:47:43.797701 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.797706 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.797712 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.797717 | orchestrator | 2025-07-12 15:47:43.797725 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-07-12 15:47:43.797730 | orchestrator | Saturday 12 July 2025 15:44:32 +0000 (0:00:01.624) 0:07:54.173 ********* 2025-07-12 15:47:43.797735 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.797741 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.797746 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.797752 | orchestrator | 2025-07-12 15:47:43.797757 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-07-12 15:47:43.797762 | orchestrator | Saturday 12 July 2025 15:44:32 +0000 (0:00:00.312) 0:07:54.486 ********* 2025-07-12 15:47:43.797768 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.797773 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.797778 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.797784 | orchestrator | 2025-07-12 15:47:43.797789 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-07-12 15:47:43.797794 | orchestrator | Saturday 12 July 2025 15:44:33 +0000 (0:00:00.328) 0:07:54.815 ********* 2025-07-12 15:47:43.797800 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-07-12 15:47:43.797805 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-12 15:47:43.797810 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-07-12 15:47:43.797816 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-07-12 15:47:43.797821 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-07-12 15:47:43.797826 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-07-12 15:47:43.797832 | orchestrator | 2025-07-12 15:47:43.797837 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-07-12 15:47:43.797843 | orchestrator | Saturday 12 July 2025 15:44:34 +0000 (0:00:01.305) 0:07:56.121 ********* 2025-07-12 15:47:43.797856 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-07-12 15:47:43.797862 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-07-12 15:47:43.797867 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-07-12 15:47:43.797872 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-07-12 15:47:43.797878 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-07-12 15:47:43.797883 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-07-12 15:47:43.797889 | orchestrator | 2025-07-12 15:47:43.797894 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-07-12 15:47:43.797899 | orchestrator | Saturday 12 July 2025 15:44:36 +0000 (0:00:02.105) 0:07:58.226 ********* 2025-07-12 15:47:43.797908 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-07-12 15:47:43.797914 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-07-12 15:47:43.797919 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-07-12 15:47:43.797924 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-07-12 15:47:43.797930 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-07-12 15:47:43.797935 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-07-12 15:47:43.797940 | orchestrator | 2025-07-12 15:47:43.797946 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-07-12 15:47:43.797951 | orchestrator | Saturday 12 July 2025 15:44:40 +0000 (0:00:03.385) 0:08:01.611 ********* 2025-07-12 15:47:43.797956 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.797962 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.797967 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-12 15:47:43.797972 | orchestrator | 2025-07-12 15:47:43.797978 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-07-12 15:47:43.797983 | orchestrator | Saturday 12 July 2025 15:44:42 +0000 (0:00:02.646) 0:08:04.258 ********* 2025-07-12 15:47:43.797988 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.797994 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.797999 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-07-12 15:47:43.798005 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-12 15:47:43.798010 | orchestrator | 2025-07-12 15:47:43.798040 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-07-12 15:47:43.798046 | orchestrator | Saturday 12 July 2025 15:44:55 +0000 (0:00:12.971) 0:08:17.230 ********* 2025-07-12 15:47:43.798051 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798057 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.798062 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.798067 | orchestrator | 2025-07-12 15:47:43.798073 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 15:47:43.798078 | orchestrator | Saturday 12 July 2025 15:44:56 +0000 (0:00:00.809) 0:08:18.040 ********* 2025-07-12 15:47:43.798084 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798089 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.798095 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.798100 | orchestrator | 2025-07-12 15:47:43.798109 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-07-12 15:47:43.798115 | orchestrator | Saturday 12 July 2025 15:44:57 +0000 (0:00:00.603) 0:08:18.644 ********* 2025-07-12 15:47:43.798120 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.798126 | orchestrator | 2025-07-12 15:47:43.798131 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-07-12 15:47:43.798137 | orchestrator | Saturday 12 July 2025 15:44:57 +0000 (0:00:00.558) 0:08:19.202 ********* 2025-07-12 15:47:43.798142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.798147 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-07-12 15:47:43.798153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.798158 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798163 | orchestrator | 2025-07-12 15:47:43.798169 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-07-12 15:47:43.798174 | orchestrator | Saturday 12 July 2025 15:44:58 +0000 (0:00:00.370) 0:08:19.573 ********* 2025-07-12 15:47:43.798180 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798185 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.798190 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.798196 | orchestrator | 2025-07-12 15:47:43.798204 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-07-12 15:47:43.798209 | orchestrator | Saturday 12 July 2025 15:44:58 +0000 (0:00:00.294) 0:08:19.867 ********* 2025-07-12 15:47:43.798218 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798224 | orchestrator | 2025-07-12 15:47:43.798229 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-07-12 15:47:43.798234 | orchestrator | Saturday 12 July 2025 15:44:58 +0000 (0:00:00.209) 0:08:20.076 ********* 2025-07-12 15:47:43.798240 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798245 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.798250 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.798256 | orchestrator | 2025-07-12 15:47:43.798261 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-07-12 15:47:43.798267 | orchestrator | Saturday 12 July 2025 15:44:59 +0000 (0:00:00.618) 0:08:20.694 ********* 2025-07-12 15:47:43.798272 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798278 | orchestrator | 2025-07-12 15:47:43.798283 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-07-12 15:47:43.798288 | orchestrator | Saturday 12 July 2025 15:44:59 +0000 (0:00:00.223) 0:08:20.917 ********* 2025-07-12 15:47:43.798293 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798299 | orchestrator | 2025-07-12 15:47:43.798304 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-07-12 15:47:43.798310 | orchestrator | Saturday 12 July 2025 15:44:59 +0000 (0:00:00.206) 0:08:21.124 ********* 2025-07-12 15:47:43.798315 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798321 | orchestrator | 2025-07-12 15:47:43.798326 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-07-12 15:47:43.798331 | orchestrator | Saturday 12 July 2025 15:44:59 +0000 (0:00:00.115) 0:08:21.239 ********* 2025-07-12 15:47:43.798337 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798342 | orchestrator | 2025-07-12 15:47:43.798348 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-07-12 15:47:43.798353 | orchestrator | Saturday 12 July 2025 15:44:59 +0000 (0:00:00.197) 0:08:21.437 ********* 2025-07-12 15:47:43.798358 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798363 | orchestrator | 2025-07-12 15:47:43.798369 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-07-12 15:47:43.798374 | orchestrator | Saturday 12 July 2025 15:45:00 +0000 (0:00:00.201) 0:08:21.639 ********* 2025-07-12 15:47:43.798379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:47:43.798385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.798390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.798396 | orchestrator | skipping: [testbed-node-3] 2025-07-12 
15:47:43.798401 | orchestrator | 2025-07-12 15:47:43.798406 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-07-12 15:47:43.798412 | orchestrator | Saturday 12 July 2025 15:45:00 +0000 (0:00:00.354) 0:08:21.993 ********* 2025-07-12 15:47:43.798417 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798422 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.798428 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.798433 | orchestrator | 2025-07-12 15:47:43.798439 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-07-12 15:47:43.798444 | orchestrator | Saturday 12 July 2025 15:45:00 +0000 (0:00:00.327) 0:08:22.321 ********* 2025-07-12 15:47:43.798449 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798455 | orchestrator | 2025-07-12 15:47:43.798460 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-07-12 15:47:43.798465 | orchestrator | Saturday 12 July 2025 15:45:01 +0000 (0:00:00.774) 0:08:23.095 ********* 2025-07-12 15:47:43.798471 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798476 | orchestrator | 2025-07-12 15:47:43.798482 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-07-12 15:47:43.798487 | orchestrator | 2025-07-12 15:47:43.798492 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 15:47:43.798502 | orchestrator | Saturday 12 July 2025 15:45:02 +0000 (0:00:00.640) 0:08:23.735 ********* 2025-07-12 15:47:43.798507 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.798513 | orchestrator | 2025-07-12 15:47:43.798519 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-07-12 15:47:43.798524 | orchestrator | Saturday 12 July 2025 15:45:03 +0000 (0:00:01.157) 0:08:24.892 ********* 2025-07-12 15:47:43.798532 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.798538 | orchestrator | 2025-07-12 15:47:43.798544 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 15:47:43.798549 | orchestrator | Saturday 12 July 2025 15:45:04 +0000 (0:00:01.171) 0:08:26.064 ********* 2025-07-12 15:47:43.798554 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798560 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.798565 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.798570 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.798576 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.798581 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.798586 | orchestrator | 2025-07-12 15:47:43.798592 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 15:47:43.798597 | orchestrator | Saturday 12 July 2025 15:45:05 +0000 (0:00:00.826) 0:08:26.890 ********* 2025-07-12 15:47:43.798602 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.798608 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.798613 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.798618 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.798624 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.798629 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.798634 | orchestrator | 2025-07-12 15:47:43.798642 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 15:47:43.798648 | orchestrator | Saturday 12 
July 2025 15:45:06 +0000 (0:00:00.988) 0:08:27.878 ********* 2025-07-12 15:47:43.798653 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.798659 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.798664 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.798669 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.798675 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.798680 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.798685 | orchestrator | 2025-07-12 15:47:43.798691 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 15:47:43.798696 | orchestrator | Saturday 12 July 2025 15:45:07 +0000 (0:00:01.232) 0:08:29.111 ********* 2025-07-12 15:47:43.798701 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.798707 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.798712 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.798717 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.798723 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.798728 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.798733 | orchestrator | 2025-07-12 15:47:43.798739 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 15:47:43.798744 | orchestrator | Saturday 12 July 2025 15:45:08 +0000 (0:00:01.029) 0:08:30.140 ********* 2025-07-12 15:47:43.798749 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798755 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.798760 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.798765 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.798771 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.798776 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.798781 | orchestrator | 2025-07-12 15:47:43.798787 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-07-12 15:47:43.798796 | orchestrator | Saturday 12 July 2025 15:45:09 +0000 (0:00:00.894) 0:08:31.035 ********* 2025-07-12 15:47:43.798801 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.798807 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.798812 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.798817 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798823 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.798828 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.798833 | orchestrator | 2025-07-12 15:47:43.798838 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 15:47:43.798853 | orchestrator | Saturday 12 July 2025 15:45:10 +0000 (0:00:00.592) 0:08:31.628 ********* 2025-07-12 15:47:43.798859 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.798865 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.798870 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.798875 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.798880 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.798886 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.798891 | orchestrator | 2025-07-12 15:47:43.798896 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 15:47:43.798902 | orchestrator | Saturday 12 July 2025 15:45:10 +0000 (0:00:00.812) 0:08:32.441 ********* 2025-07-12 15:47:43.798907 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.798912 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.798918 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.798923 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.798928 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.798934 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.798939 | orchestrator 
| 2025-07-12 15:47:43.798944 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 15:47:43.798950 | orchestrator | Saturday 12 July 2025 15:45:11 +0000 (0:00:01.062) 0:08:33.503 ********* 2025-07-12 15:47:43.798955 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.798961 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.798966 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.798971 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.798976 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.798982 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.798987 | orchestrator | 2025-07-12 15:47:43.798992 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 15:47:43.798998 | orchestrator | Saturday 12 July 2025 15:45:13 +0000 (0:00:01.323) 0:08:34.827 ********* 2025-07-12 15:47:43.799003 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.799009 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.799014 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.799019 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.799025 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.799030 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.799035 | orchestrator | 2025-07-12 15:47:43.799040 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 15:47:43.799046 | orchestrator | Saturday 12 July 2025 15:45:13 +0000 (0:00:00.634) 0:08:35.461 ********* 2025-07-12 15:47:43.799051 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.799059 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.799065 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.799070 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.799076 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.799081 | 
orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.799086 | orchestrator | 2025-07-12 15:47:43.799092 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 15:47:43.799097 | orchestrator | Saturday 12 July 2025 15:45:14 +0000 (0:00:00.778) 0:08:36.240 ********* 2025-07-12 15:47:43.799103 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.799108 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.799117 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.799122 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.799128 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.799133 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.799138 | orchestrator | 2025-07-12 15:47:43.799144 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 15:47:43.799149 | orchestrator | Saturday 12 July 2025 15:45:15 +0000 (0:00:00.583) 0:08:36.823 ********* 2025-07-12 15:47:43.799155 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.799160 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.799165 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.799170 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.799176 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.799181 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.799186 | orchestrator | 2025-07-12 15:47:43.799194 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 15:47:43.799200 | orchestrator | Saturday 12 July 2025 15:45:16 +0000 (0:00:00.811) 0:08:37.634 ********* 2025-07-12 15:47:43.799205 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.799210 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.799216 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.799221 | orchestrator | ok: [testbed-node-3] 
2025-07-12 15:47:43.799226 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.799232 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.799237 | orchestrator | 2025-07-12 15:47:43.799242 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 15:47:43.799248 | orchestrator | Saturday 12 July 2025 15:45:16 +0000 (0:00:00.655) 0:08:38.290 ********* 2025-07-12 15:47:43.799253 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.799258 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.799264 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.799269 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.799274 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.799280 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.799285 | orchestrator | 2025-07-12 15:47:43.799290 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 15:47:43.799296 | orchestrator | Saturday 12 July 2025 15:45:17 +0000 (0:00:00.791) 0:08:39.082 ********* 2025-07-12 15:47:43.799301 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:47:43.799306 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:47:43.799312 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:47:43.799317 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.799322 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.799328 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.799333 | orchestrator | 2025-07-12 15:47:43.799338 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 15:47:43.799344 | orchestrator | Saturday 12 July 2025 15:45:18 +0000 (0:00:00.594) 0:08:39.676 ********* 2025-07-12 15:47:43.799349 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.799354 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.799360 | 
orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.799365 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.799370 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.799376 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.799381 | orchestrator | 2025-07-12 15:47:43.799386 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 15:47:43.799392 | orchestrator | Saturday 12 July 2025 15:45:18 +0000 (0:00:00.787) 0:08:40.464 ********* 2025-07-12 15:47:43.799397 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.799402 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.799408 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.799413 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.799419 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.799424 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.799433 | orchestrator | 2025-07-12 15:47:43.799439 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 15:47:43.799444 | orchestrator | Saturday 12 July 2025 15:45:19 +0000 (0:00:00.588) 0:08:41.052 ********* 2025-07-12 15:47:43.799449 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.799455 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.799460 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.799465 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.799471 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.799476 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.799481 | orchestrator | 2025-07-12 15:47:43.799486 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-07-12 15:47:43.799492 | orchestrator | Saturday 12 July 2025 15:45:20 +0000 (0:00:01.251) 0:08:42.303 ********* 2025-07-12 15:47:43.799497 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.799503 | orchestrator 
| 2025-07-12 15:47:43.799508 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-07-12 15:47:43.799513 | orchestrator | Saturday 12 July 2025 15:45:25 +0000 (0:00:04.532) 0:08:46.836 ********* 2025-07-12 15:47:43.799519 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.799524 | orchestrator | 2025-07-12 15:47:43.799529 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-07-12 15:47:43.799535 | orchestrator | Saturday 12 July 2025 15:45:27 +0000 (0:00:02.052) 0:08:48.888 ********* 2025-07-12 15:47:43.799540 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.799545 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.799551 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.799556 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.799561 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.799567 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.799572 | orchestrator | 2025-07-12 15:47:43.799577 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-07-12 15:47:43.799583 | orchestrator | Saturday 12 July 2025 15:45:29 +0000 (0:00:01.734) 0:08:50.623 ********* 2025-07-12 15:47:43.799591 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.799596 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.799602 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.799607 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.799612 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.799618 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.799623 | orchestrator | 2025-07-12 15:47:43.799628 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-07-12 15:47:43.799634 | orchestrator | Saturday 12 July 2025 15:45:29 +0000 (0:00:00.921) 0:08:51.544 
********* 2025-07-12 15:47:43.799639 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.799645 | orchestrator | 2025-07-12 15:47:43.799650 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-07-12 15:47:43.799656 | orchestrator | Saturday 12 July 2025 15:45:31 +0000 (0:00:01.177) 0:08:52.722 ********* 2025-07-12 15:47:43.799661 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.799666 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.799672 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.799677 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.799682 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.799690 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.799696 | orchestrator | 2025-07-12 15:47:43.799701 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-07-12 15:47:43.799706 | orchestrator | Saturday 12 July 2025 15:45:32 +0000 (0:00:01.692) 0:08:54.415 ********* 2025-07-12 15:47:43.799712 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.799717 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.799722 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.799728 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.799737 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.799742 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.799747 | orchestrator | 2025-07-12 15:47:43.799753 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-07-12 15:47:43.799758 | orchestrator | Saturday 12 July 2025 15:45:36 +0000 (0:00:03.232) 0:08:57.647 ********* 2025-07-12 15:47:43.799764 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.799769 | orchestrator | 2025-07-12 15:47:43.799774 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-07-12 15:47:43.799780 | orchestrator | Saturday 12 July 2025 15:45:37 +0000 (0:00:01.304) 0:08:58.952 ********* 2025-07-12 15:47:43.799785 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.799790 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.799796 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:47:43.799801 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.799806 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.799812 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.799817 | orchestrator | 2025-07-12 15:47:43.799822 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-07-12 15:47:43.799828 | orchestrator | Saturday 12 July 2025 15:45:38 +0000 (0:00:00.841) 0:08:59.793 ********* 2025-07-12 15:47:43.799833 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:47:43.799838 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:47:43.799875 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:47:43.799881 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.799886 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.799891 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.799896 | orchestrator | 2025-07-12 15:47:43.799902 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-07-12 15:47:43.799907 | orchestrator | Saturday 12 July 2025 15:45:40 +0000 (0:00:02.100) 0:09:01.894 ********* 2025-07-12 15:47:43.799913 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:47:43.799918 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:47:43.799924 | orchestrator | ok: 
[testbed-node-2] 2025-07-12 15:47:43.799929 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.799934 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.799939 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.799945 | orchestrator | 2025-07-12 15:47:43.799950 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-07-12 15:47:43.799955 | orchestrator | 2025-07-12 15:47:43.799961 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 15:47:43.799966 | orchestrator | Saturday 12 July 2025 15:45:41 +0000 (0:00:01.223) 0:09:03.118 ********* 2025-07-12 15:47:43.799971 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.799977 | orchestrator | 2025-07-12 15:47:43.799982 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 15:47:43.799988 | orchestrator | Saturday 12 July 2025 15:45:42 +0000 (0:00:00.538) 0:09:03.656 ********* 2025-07-12 15:47:43.799993 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.799998 | orchestrator | 2025-07-12 15:47:43.800004 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 15:47:43.800009 | orchestrator | Saturday 12 July 2025 15:45:43 +0000 (0:00:00.922) 0:09:04.578 ********* 2025-07-12 15:47:43.800014 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.800020 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.800025 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.800030 | orchestrator | 2025-07-12 15:47:43.800036 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 15:47:43.800041 | orchestrator | 
Saturday 12 July 2025 15:45:43 +0000 (0:00:00.307) 0:09:04.886 ********* 2025-07-12 15:47:43.800051 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.800057 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.800062 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.800067 | orchestrator | 2025-07-12 15:47:43.800073 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 15:47:43.800081 | orchestrator | Saturday 12 July 2025 15:45:43 +0000 (0:00:00.666) 0:09:05.552 ********* 2025-07-12 15:47:43.800087 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.800092 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.800097 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.800103 | orchestrator | 2025-07-12 15:47:43.800108 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 15:47:43.800114 | orchestrator | Saturday 12 July 2025 15:45:44 +0000 (0:00:01.004) 0:09:06.557 ********* 2025-07-12 15:47:43.800119 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.800124 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.800129 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.800135 | orchestrator | 2025-07-12 15:47:43.800140 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 15:47:43.800146 | orchestrator | Saturday 12 July 2025 15:45:45 +0000 (0:00:00.727) 0:09:07.284 ********* 2025-07-12 15:47:43.800151 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.800156 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.800162 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.800167 | orchestrator | 2025-07-12 15:47:43.800172 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 15:47:43.800178 | orchestrator | Saturday 12 July 2025 15:45:46 +0000 (0:00:00.325) 
0:09:07.610 ********* 2025-07-12 15:47:43.800183 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.800188 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.800196 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.800202 | orchestrator | 2025-07-12 15:47:43.800207 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 15:47:43.800212 | orchestrator | Saturday 12 July 2025 15:45:46 +0000 (0:00:00.328) 0:09:07.938 ********* 2025-07-12 15:47:43.800218 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.800223 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.800228 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.800234 | orchestrator | 2025-07-12 15:47:43.800239 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 15:47:43.800244 | orchestrator | Saturday 12 July 2025 15:45:46 +0000 (0:00:00.595) 0:09:08.534 ********* 2025-07-12 15:47:43.800250 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.800255 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.800260 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.800266 | orchestrator | 2025-07-12 15:47:43.800271 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 15:47:43.800276 | orchestrator | Saturday 12 July 2025 15:45:47 +0000 (0:00:00.750) 0:09:09.285 ********* 2025-07-12 15:47:43.800282 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.800287 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.800292 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.800297 | orchestrator | 2025-07-12 15:47:43.800303 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 15:47:43.800308 | orchestrator | Saturday 12 July 2025 15:45:48 +0000 (0:00:00.721) 0:09:10.006 ********* 2025-07-12 
15:47:43.800314 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.800319 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.800325 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.800330 | orchestrator | 2025-07-12 15:47:43.800335 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 15:47:43.800341 | orchestrator | Saturday 12 July 2025 15:45:48 +0000 (0:00:00.302) 0:09:10.309 ********* 2025-07-12 15:47:43.800346 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.800354 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.800359 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.800365 | orchestrator | 2025-07-12 15:47:43.800370 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 15:47:43.800376 | orchestrator | Saturday 12 July 2025 15:45:49 +0000 (0:00:00.540) 0:09:10.849 ********* 2025-07-12 15:47:43.800381 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.800386 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.800392 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.800397 | orchestrator | 2025-07-12 15:47:43.800403 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 15:47:43.800408 | orchestrator | Saturday 12 July 2025 15:45:49 +0000 (0:00:00.333) 0:09:11.183 ********* 2025-07-12 15:47:43.800413 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.800419 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.800424 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.800429 | orchestrator | 2025-07-12 15:47:43.800435 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 15:47:43.800440 | orchestrator | Saturday 12 July 2025 15:45:49 +0000 (0:00:00.324) 0:09:11.507 ********* 2025-07-12 15:47:43.800445 | orchestrator | ok: 
[testbed-node-3] 2025-07-12 15:47:43.800451 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.800456 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.800461 | orchestrator | 2025-07-12 15:47:43.800467 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 15:47:43.800472 | orchestrator | Saturday 12 July 2025 15:45:50 +0000 (0:00:00.324) 0:09:11.831 ********* 2025-07-12 15:47:43.800477 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.800483 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.800488 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.800493 | orchestrator | 2025-07-12 15:47:43.800499 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 15:47:43.800504 | orchestrator | Saturday 12 July 2025 15:45:50 +0000 (0:00:00.641) 0:09:12.473 ********* 2025-07-12 15:47:43.800509 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.800515 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.800520 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.800525 | orchestrator | 2025-07-12 15:47:43.800531 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 15:47:43.800536 | orchestrator | Saturday 12 July 2025 15:45:51 +0000 (0:00:00.320) 0:09:12.793 ********* 2025-07-12 15:47:43.800541 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.800547 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.800552 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.800557 | orchestrator | 2025-07-12 15:47:43.800563 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 15:47:43.800568 | orchestrator | Saturday 12 July 2025 15:45:51 +0000 (0:00:00.368) 0:09:13.161 ********* 2025-07-12 15:47:43.800574 | orchestrator | ok: [testbed-node-3] 
2025-07-12 15:47:43.800581 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.800587 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.800592 | orchestrator | 2025-07-12 15:47:43.800598 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 15:47:43.800603 | orchestrator | Saturday 12 July 2025 15:45:51 +0000 (0:00:00.335) 0:09:13.497 ********* 2025-07-12 15:47:43.800609 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.800614 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.800619 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.800625 | orchestrator | 2025-07-12 15:47:43.800630 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-07-12 15:47:43.800635 | orchestrator | Saturday 12 July 2025 15:45:52 +0000 (0:00:00.867) 0:09:14.365 ********* 2025-07-12 15:47:43.800641 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.800646 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.800652 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-07-12 15:47:43.800660 | orchestrator | 2025-07-12 15:47:43.800665 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-07-12 15:47:43.800671 | orchestrator | Saturday 12 July 2025 15:45:53 +0000 (0:00:00.339) 0:09:14.704 ********* 2025-07-12 15:47:43.800676 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 15:47:43.800681 | orchestrator | 2025-07-12 15:47:43.800689 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-07-12 15:47:43.800695 | orchestrator | Saturday 12 July 2025 15:45:55 +0000 (0:00:02.096) 0:09:16.801 ********* 2025-07-12 15:47:43.800700 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-07-12 15:47:43.800706 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.800712 | orchestrator | 2025-07-12 15:47:43.800717 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-07-12 15:47:43.800723 | orchestrator | Saturday 12 July 2025 15:45:55 +0000 (0:00:00.181) 0:09:16.983 ********* 2025-07-12 15:47:43.800729 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 15:47:43.800738 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 15:47:43.800744 | orchestrator | 2025-07-12 15:47:43.800750 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-07-12 15:47:43.800755 | orchestrator | Saturday 12 July 2025 15:46:03 +0000 (0:00:08.421) 0:09:25.404 ********* 2025-07-12 15:47:43.800760 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 15:47:43.800766 | orchestrator | 2025-07-12 15:47:43.800771 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-07-12 15:47:43.800776 | orchestrator | Saturday 12 July 2025 15:46:07 +0000 (0:00:03.768) 0:09:29.173 ********* 2025-07-12 15:47:43.800782 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.800787 | orchestrator | 2025-07-12 15:47:43.800792 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-07-12 15:47:43.800798 | orchestrator | Saturday 12 July 2025 15:46:08 +0000 (0:00:00.689) 0:09:29.862 ********* 2025-07-12 15:47:43.800803 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-12 15:47:43.800808 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-12 15:47:43.800814 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-12 15:47:43.800819 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-07-12 15:47:43.800824 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-07-12 15:47:43.800830 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-07-12 15:47:43.800835 | orchestrator | 2025-07-12 15:47:43.800840 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-07-12 15:47:43.800869 | orchestrator | Saturday 12 July 2025 15:46:09 +0000 (0:00:01.223) 0:09:31.086 ********* 2025-07-12 15:47:43.800875 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 15:47:43.800880 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 15:47:43.800886 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 15:47:43.800891 | orchestrator | 2025-07-12 15:47:43.800897 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-07-12 15:47:43.800905 | orchestrator | Saturday 12 July 2025 15:46:12 +0000 (0:00:02.785) 0:09:33.871 ********* 2025-07-12 15:47:43.800911 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 15:47:43.800916 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 15:47:43.800921 | orchestrator | changed: [testbed-node-3] 
2025-07-12 15:47:43.800927 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 15:47:43.800932 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 15:47:43.800937 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-12 15:47:43.800943 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.800951 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-12 15:47:43.800957 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.800962 | orchestrator | 2025-07-12 15:47:43.800967 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-07-12 15:47:43.800973 | orchestrator | Saturday 12 July 2025 15:46:13 +0000 (0:00:01.545) 0:09:35.416 ********* 2025-07-12 15:47:43.800978 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.800984 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.800989 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.800994 | orchestrator | 2025-07-12 15:47:43.801000 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-07-12 15:47:43.801005 | orchestrator | Saturday 12 July 2025 15:46:16 +0000 (0:00:02.807) 0:09:38.224 ********* 2025-07-12 15:47:43.801010 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801016 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.801021 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.801026 | orchestrator | 2025-07-12 15:47:43.801032 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-07-12 15:47:43.801037 | orchestrator | Saturday 12 July 2025 15:46:16 +0000 (0:00:00.303) 0:09:38.527 ********* 2025-07-12 15:47:43.801043 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.801048 | orchestrator | 2025-07-12 15:47:43.801056 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-07-12 15:47:43.801061 | orchestrator | Saturday 12 July 2025 15:46:17 +0000 (0:00:00.923) 0:09:39.451 ********* 2025-07-12 15:47:43.801067 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.801072 | orchestrator | 2025-07-12 15:47:43.801077 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-07-12 15:47:43.801083 | orchestrator | Saturday 12 July 2025 15:46:18 +0000 (0:00:00.583) 0:09:40.034 ********* 2025-07-12 15:47:43.801088 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.801093 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.801099 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.801104 | orchestrator | 2025-07-12 15:47:43.801109 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-07-12 15:47:43.801115 | orchestrator | Saturday 12 July 2025 15:46:19 +0000 (0:00:01.169) 0:09:41.203 ********* 2025-07-12 15:47:43.801120 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.801126 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.801131 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.801136 | orchestrator | 2025-07-12 15:47:43.801142 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-07-12 15:47:43.801147 | orchestrator | Saturday 12 July 2025 15:46:21 +0000 (0:00:01.389) 0:09:42.593 ********* 2025-07-12 15:47:43.801152 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.801158 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.801163 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.801169 | orchestrator | 2025-07-12 15:47:43.801174 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-07-12 15:47:43.801179 | orchestrator | Saturday 12 July 2025 15:46:22 +0000 (0:00:01.731) 0:09:44.325 ********* 2025-07-12 15:47:43.801188 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.801193 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.801199 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.801204 | orchestrator | 2025-07-12 15:47:43.801209 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-07-12 15:47:43.801215 | orchestrator | Saturday 12 July 2025 15:46:24 +0000 (0:00:01.976) 0:09:46.301 ********* 2025-07-12 15:47:43.801220 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801225 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801231 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801236 | orchestrator | 2025-07-12 15:47:43.801241 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 15:47:43.801247 | orchestrator | Saturday 12 July 2025 15:46:26 +0000 (0:00:01.651) 0:09:47.953 ********* 2025-07-12 15:47:43.801252 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.801257 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.801263 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.801268 | orchestrator | 2025-07-12 15:47:43.801273 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-07-12 15:47:43.801279 | orchestrator | Saturday 12 July 2025 15:46:27 +0000 (0:00:00.703) 0:09:48.657 ********* 2025-07-12 15:47:43.801284 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.801289 | orchestrator | 2025-07-12 15:47:43.801295 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-07-12 15:47:43.801300 | orchestrator | 
Saturday 12 July 2025 15:46:27 +0000 (0:00:00.782) 0:09:49.439 ********* 2025-07-12 15:47:43.801305 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801311 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801316 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801322 | orchestrator | 2025-07-12 15:47:43.801327 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-07-12 15:47:43.801332 | orchestrator | Saturday 12 July 2025 15:46:28 +0000 (0:00:00.341) 0:09:49.781 ********* 2025-07-12 15:47:43.801338 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.801343 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.801348 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.801354 | orchestrator | 2025-07-12 15:47:43.801359 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-07-12 15:47:43.801364 | orchestrator | Saturday 12 July 2025 15:46:29 +0000 (0:00:01.180) 0:09:50.962 ********* 2025-07-12 15:47:43.801370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.801375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:47:43.801380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.801386 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801391 | orchestrator | 2025-07-12 15:47:43.801397 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-07-12 15:47:43.801403 | orchestrator | Saturday 12 July 2025 15:46:30 +0000 (0:00:00.900) 0:09:51.862 ********* 2025-07-12 15:47:43.801408 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801413 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801418 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801423 | orchestrator | 2025-07-12 15:47:43.801428 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-07-12 15:47:43.801432 | orchestrator | 2025-07-12 15:47:43.801437 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 15:47:43.801442 | orchestrator | Saturday 12 July 2025 15:46:31 +0000 (0:00:00.805) 0:09:52.668 ********* 2025-07-12 15:47:43.801447 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.801452 | orchestrator | 2025-07-12 15:47:43.801456 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 15:47:43.801464 | orchestrator | Saturday 12 July 2025 15:46:31 +0000 (0:00:00.519) 0:09:53.187 ********* 2025-07-12 15:47:43.801469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.801474 | orchestrator | 2025-07-12 15:47:43.801479 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 15:47:43.801486 | orchestrator | Saturday 12 July 2025 15:46:32 +0000 (0:00:00.905) 0:09:54.093 ********* 2025-07-12 15:47:43.801490 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801495 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.801500 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.801505 | orchestrator | 2025-07-12 15:47:43.801509 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 15:47:43.801514 | orchestrator | Saturday 12 July 2025 15:46:32 +0000 (0:00:00.353) 0:09:54.446 ********* 2025-07-12 15:47:43.801519 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801524 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801528 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801533 | orchestrator | 
2025-07-12 15:47:43.801538 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 15:47:43.801543 | orchestrator | Saturday 12 July 2025 15:46:33 +0000 (0:00:00.692) 0:09:55.138 ********* 2025-07-12 15:47:43.801547 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801552 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801557 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801562 | orchestrator | 2025-07-12 15:47:43.801566 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 15:47:43.801571 | orchestrator | Saturday 12 July 2025 15:46:34 +0000 (0:00:00.701) 0:09:55.840 ********* 2025-07-12 15:47:43.801576 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801581 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801585 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801590 | orchestrator | 2025-07-12 15:47:43.801595 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 15:47:43.801600 | orchestrator | Saturday 12 July 2025 15:46:35 +0000 (0:00:01.121) 0:09:56.961 ********* 2025-07-12 15:47:43.801605 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801609 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.801614 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.801619 | orchestrator | 2025-07-12 15:47:43.801623 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 15:47:43.801628 | orchestrator | Saturday 12 July 2025 15:46:35 +0000 (0:00:00.333) 0:09:57.295 ********* 2025-07-12 15:47:43.801633 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801638 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.801642 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.801647 | orchestrator | 2025-07-12 15:47:43.801652 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 15:47:43.801657 | orchestrator | Saturday 12 July 2025 15:46:36 +0000 (0:00:00.317) 0:09:57.612 ********* 2025-07-12 15:47:43.801661 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801666 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.801671 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.801675 | orchestrator | 2025-07-12 15:47:43.801680 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 15:47:43.801685 | orchestrator | Saturday 12 July 2025 15:46:36 +0000 (0:00:00.340) 0:09:57.953 ********* 2025-07-12 15:47:43.801690 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801694 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801699 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801704 | orchestrator | 2025-07-12 15:47:43.801709 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 15:47:43.801713 | orchestrator | Saturday 12 July 2025 15:46:37 +0000 (0:00:01.028) 0:09:58.981 ********* 2025-07-12 15:47:43.801722 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801727 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801731 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801736 | orchestrator | 2025-07-12 15:47:43.801741 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 15:47:43.801746 | orchestrator | Saturday 12 July 2025 15:46:38 +0000 (0:00:00.788) 0:09:59.769 ********* 2025-07-12 15:47:43.801751 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801755 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.801760 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.801765 | orchestrator | 2025-07-12 15:47:43.801770 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-07-12 15:47:43.801775 | orchestrator | Saturday 12 July 2025 15:46:38 +0000 (0:00:00.369) 0:10:00.139 ********* 2025-07-12 15:47:43.801779 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801784 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.801789 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.801793 | orchestrator | 2025-07-12 15:47:43.801798 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 15:47:43.801803 | orchestrator | Saturday 12 July 2025 15:46:38 +0000 (0:00:00.325) 0:10:00.464 ********* 2025-07-12 15:47:43.801808 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801812 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801817 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801822 | orchestrator | 2025-07-12 15:47:43.801829 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 15:47:43.801834 | orchestrator | Saturday 12 July 2025 15:46:39 +0000 (0:00:00.680) 0:10:01.145 ********* 2025-07-12 15:47:43.801839 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801850 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801855 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801860 | orchestrator | 2025-07-12 15:47:43.801865 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 15:47:43.801870 | orchestrator | Saturday 12 July 2025 15:46:39 +0000 (0:00:00.330) 0:10:01.475 ********* 2025-07-12 15:47:43.801874 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801879 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.801884 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.801888 | orchestrator | 2025-07-12 15:47:43.801893 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-07-12 15:47:43.801898 | orchestrator | Saturday 12 July 2025 15:46:40 +0000 (0:00:00.368) 0:10:01.843 ********* 2025-07-12 15:47:43.801903 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801907 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.801912 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.801917 | orchestrator | 2025-07-12 15:47:43.801922 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 15:47:43.801926 | orchestrator | Saturday 12 July 2025 15:46:40 +0000 (0:00:00.311) 0:10:02.155 ********* 2025-07-12 15:47:43.801933 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801938 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.801943 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.801948 | orchestrator | 2025-07-12 15:47:43.801953 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 15:47:43.801957 | orchestrator | Saturday 12 July 2025 15:46:41 +0000 (0:00:00.672) 0:10:02.828 ********* 2025-07-12 15:47:43.801962 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.801967 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.801971 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.801976 | orchestrator | 2025-07-12 15:47:43.801981 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 15:47:43.801986 | orchestrator | Saturday 12 July 2025 15:46:41 +0000 (0:00:00.317) 0:10:03.145 ********* 2025-07-12 15:47:43.801990 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.801995 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.802004 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.802009 | orchestrator | 2025-07-12 15:47:43.802027 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-07-12 15:47:43.802033 | orchestrator | Saturday 12 July 2025 15:46:41 +0000 (0:00:00.406) 0:10:03.552 ********* 2025-07-12 15:47:43.802038 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.802042 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.802047 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.802052 | orchestrator | 2025-07-12 15:47:43.802057 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-07-12 15:47:43.802061 | orchestrator | Saturday 12 July 2025 15:46:42 +0000 (0:00:00.876) 0:10:04.429 ********* 2025-07-12 15:47:43.802066 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.802071 | orchestrator | 2025-07-12 15:47:43.802075 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-12 15:47:43.802080 | orchestrator | Saturday 12 July 2025 15:46:43 +0000 (0:00:00.579) 0:10:05.008 ********* 2025-07-12 15:47:43.802085 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 15:47:43.802090 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 15:47:43.802095 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 15:47:43.802099 | orchestrator | 2025-07-12 15:47:43.802104 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-12 15:47:43.802109 | orchestrator | Saturday 12 July 2025 15:46:45 +0000 (0:00:02.074) 0:10:07.083 ********* 2025-07-12 15:47:43.802114 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 15:47:43.802118 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 15:47:43.802123 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.802128 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 15:47:43.802133 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-12 15:47:43.802137 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.802142 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 15:47:43.802147 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-12 15:47:43.802152 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.802156 | orchestrator | 2025-07-12 15:47:43.802161 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-07-12 15:47:43.802166 | orchestrator | Saturday 12 July 2025 15:46:47 +0000 (0:00:01.525) 0:10:08.608 ********* 2025-07-12 15:47:43.802171 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.802176 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.802180 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.802185 | orchestrator | 2025-07-12 15:47:43.802190 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-07-12 15:47:43.802195 | orchestrator | Saturday 12 July 2025 15:46:47 +0000 (0:00:00.320) 0:10:08.929 ********* 2025-07-12 15:47:43.802200 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.802205 | orchestrator | 2025-07-12 15:47:43.802209 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-07-12 15:47:43.802214 | orchestrator | Saturday 12 July 2025 15:46:47 +0000 (0:00:00.572) 0:10:09.501 ********* 2025-07-12 15:47:43.802219 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.802226 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.802232 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.802240 | orchestrator | 2025-07-12 15:47:43.802245 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-07-12 15:47:43.802250 | orchestrator | Saturday 12 July 2025 15:46:49 +0000 (0:00:01.065) 0:10:10.567 ********* 2025-07-12 15:47:43.802255 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 15:47:43.802260 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-12 15:47:43.802264 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 15:47:43.802269 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-12 15:47:43.802274 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 15:47:43.802281 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-12 15:47:43.802286 | orchestrator | 2025-07-12 15:47:43.802291 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-12 15:47:43.802296 | orchestrator | Saturday 12 July 2025 15:46:53 +0000 (0:00:04.756) 0:10:15.324 ********* 2025-07-12 15:47:43.802301 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 15:47:43.802306 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 15:47:43.802310 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-07-12 15:47:43.802315 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 15:47:43.802320 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 15:47:43.802325 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 15:47:43.802329 | orchestrator | 2025-07-12 15:47:43.802334 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-12 15:47:43.802339 | orchestrator | Saturday 12 July 2025 15:46:55 +0000 (0:00:02.213) 0:10:17.537 ********* 2025-07-12 15:47:43.802343 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 15:47:43.802348 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.802353 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 15:47:43.802358 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.802363 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 15:47:43.802367 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.802372 | orchestrator | 2025-07-12 15:47:43.802377 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-07-12 15:47:43.802382 | orchestrator | Saturday 12 July 2025 15:46:57 +0000 (0:00:01.139) 0:10:18.676 ********* 2025-07-12 15:47:43.802387 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-07-12 15:47:43.802391 | orchestrator | 2025-07-12 15:47:43.802396 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-07-12 15:47:43.802401 | orchestrator | Saturday 12 July 2025 15:46:57 +0000 (0:00:00.233) 0:10:18.910 ********* 2025-07-12 15:47:43.802406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 15:47:43.802410 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 15:47:43.802415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 15:47:43.802420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 15:47:43.802425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 15:47:43.802435 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.802440 | orchestrator | 2025-07-12 15:47:43.802444 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-07-12 15:47:43.802449 | orchestrator | Saturday 12 July 2025 15:46:58 +0000 (0:00:00.825) 0:10:19.735 ********* 2025-07-12 15:47:43.802454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 15:47:43.802459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 15:47:43.802463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 15:47:43.802468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 15:47:43.802473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 15:47:43.802480 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.802485 | orchestrator | 2025-07-12 15:47:43.802490 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-07-12 15:47:43.802495 | orchestrator | Saturday 12 July 2025 15:46:59 +0000 (0:00:01.097) 0:10:20.833 ********* 2025-07-12 15:47:43.802500 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 15:47:43.802505 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 15:47:43.802509 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 15:47:43.802514 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 15:47:43.802522 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 15:47:43.802527 | orchestrator | 2025-07-12 15:47:43.802531 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-07-12 15:47:43.802536 | orchestrator | Saturday 12 July 2025 15:47:31 +0000 (0:00:31.953) 0:10:52.787 ********* 2025-07-12 15:47:43.802541 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.802546 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.802551 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.802555 | orchestrator | 2025-07-12 15:47:43.802560 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-07-12 15:47:43.802565 | orchestrator | Saturday 12 July 2025 15:47:31 +0000 (0:00:00.336) 0:10:53.124 
********* 2025-07-12 15:47:43.802570 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.802574 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.802579 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.802584 | orchestrator | 2025-07-12 15:47:43.802589 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-07-12 15:47:43.802594 | orchestrator | Saturday 12 July 2025 15:47:31 +0000 (0:00:00.308) 0:10:53.432 ********* 2025-07-12 15:47:43.802598 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.802603 | orchestrator | 2025-07-12 15:47:43.802608 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-07-12 15:47:43.802613 | orchestrator | Saturday 12 July 2025 15:47:32 +0000 (0:00:00.765) 0:10:54.198 ********* 2025-07-12 15:47:43.802617 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.802626 | orchestrator | 2025-07-12 15:47:43.802631 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-07-12 15:47:43.802636 | orchestrator | Saturday 12 July 2025 15:47:33 +0000 (0:00:00.521) 0:10:54.719 ********* 2025-07-12 15:47:43.802640 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.802645 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.802650 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.802655 | orchestrator | 2025-07-12 15:47:43.802659 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-07-12 15:47:43.802664 | orchestrator | Saturday 12 July 2025 15:47:34 +0000 (0:00:01.248) 0:10:55.967 ********* 2025-07-12 15:47:43.802669 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.802673 | orchestrator | 
changed: [testbed-node-4] 2025-07-12 15:47:43.802678 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.802683 | orchestrator | 2025-07-12 15:47:43.802688 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-07-12 15:47:43.802692 | orchestrator | Saturday 12 July 2025 15:47:35 +0000 (0:00:01.466) 0:10:57.434 ********* 2025-07-12 15:47:43.802697 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:47:43.802702 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:47:43.802707 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:47:43.802711 | orchestrator | 2025-07-12 15:47:43.802716 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-07-12 15:47:43.802721 | orchestrator | Saturday 12 July 2025 15:47:37 +0000 (0:00:01.702) 0:10:59.136 ********* 2025-07-12 15:47:43.802725 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.802730 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.802735 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 15:47:43.802740 | orchestrator | 2025-07-12 15:47:43.802745 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 15:47:43.802750 | orchestrator | Saturday 12 July 2025 15:47:41 +0000 (0:00:03.444) 0:11:02.580 ********* 2025-07-12 15:47:43.802754 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.802759 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.802764 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.802769 | orchestrator | 2025-07-12 15:47:43.802773 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-07-12 15:47:43.802778 | orchestrator | Saturday 12 July 2025 15:47:41 +0000 (0:00:00.301) 0:11:02.882 ********* 2025-07-12 15:47:43.802785 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:47:43.802790 | orchestrator | 2025-07-12 15:47:43.802795 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-12 15:47:43.802800 | orchestrator | Saturday 12 July 2025 15:47:41 +0000 (0:00:00.451) 0:11:03.334 ********* 2025-07-12 15:47:43.802804 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.802809 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.802814 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.802819 | orchestrator | 2025-07-12 15:47:43.802824 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-12 15:47:43.802829 | orchestrator | Saturday 12 July 2025 15:47:42 +0000 (0:00:00.436) 0:11:03.770 ********* 2025-07-12 15:47:43.802834 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.802838 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:47:43.802864 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:47:43.802870 | orchestrator | 2025-07-12 15:47:43.802875 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-12 15:47:43.802879 | orchestrator | Saturday 12 July 2025 15:47:42 +0000 (0:00:00.283) 0:11:04.053 ********* 2025-07-12 15:47:43.802887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 15:47:43.802892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 15:47:43.802897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 15:47:43.802904 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:47:43.802909 | 
orchestrator | 2025-07-12 15:47:43.802914 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-12 15:47:43.802919 | orchestrator | Saturday 12 July 2025 15:47:43 +0000 (0:00:00.509) 0:11:04.563 ********* 2025-07-12 15:47:43.802923 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:47:43.802928 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:47:43.802933 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:47:43.802938 | orchestrator | 2025-07-12 15:47:43.802942 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:47:43.802947 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-07-12 15:47:43.802952 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-07-12 15:47:43.802957 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-07-12 15:47:43.802962 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-07-12 15:47:43.802966 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-07-12 15:47:43.802971 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-07-12 15:47:43.802976 | orchestrator | 2025-07-12 15:47:43.802981 | orchestrator | 2025-07-12 15:47:43.802986 | orchestrator | 2025-07-12 15:47:43.802990 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:47:43.802995 | orchestrator | Saturday 12 July 2025 15:47:43 +0000 (0:00:00.198) 0:11:04.761 ********* 2025-07-12 15:47:43.803000 | orchestrator | =============================================================================== 2025-07-12 15:47:43.803004 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 73.89s 2025-07-12 15:47:43.803009 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.88s 2025-07-12 15:47:43.803014 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.95s 2025-07-12 15:47:43.803018 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.22s 2025-07-12 15:47:43.803023 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.91s 2025-07-12 15:47:43.803028 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.00s 2025-07-12 15:47:43.803033 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.97s 2025-07-12 15:47:43.803037 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.98s 2025-07-12 15:47:43.803042 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.40s 2025-07-12 15:47:43.803047 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.42s 2025-07-12 15:47:43.803052 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.65s 2025-07-12 15:47:43.803056 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.23s 2025-07-12 15:47:43.803061 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.81s 2025-07-12 15:47:43.803066 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.76s 2025-07-12 15:47:43.803073 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.53s 2025-07-12 15:47:43.803078 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.77s 2025-07-12 15:47:43.803083 | orchestrator | ceph-rgw : 
Systemd start rgw container ---------------------------------- 3.44s 2025-07-12 15:47:43.803088 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.39s 2025-07-12 15:47:43.803092 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.23s 2025-07-12 15:47:43.803097 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.19s 2025-07-12 15:47:43.803105 | orchestrator | 2025-07-12 15:47:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:47:46.815558 | orchestrator | 2025-07-12 15:47:46 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED 2025-07-12 15:47:46.817566 | orchestrator | 2025-07-12 15:47:46 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state STARTED 2025-07-12 15:47:46.819930 | orchestrator | 2025-07-12 15:47:46 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED 2025-07-12 15:47:46.819964 | orchestrator | 2025-07-12 15:47:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:47:49.873001 | orchestrator | 2025-07-12 15:47:49 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED 2025-07-12 15:47:49.875750 | orchestrator | 2025-07-12 15:47:49 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state STARTED 2025-07-12 15:47:49.875787 | orchestrator | 2025-07-12 15:47:49 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED 2025-07-12 15:47:49.875819 | orchestrator | 2025-07-12 15:47:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:47:52.917187 | orchestrator | 2025-07-12 15:47:52 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state STARTED 2025-07-12 15:47:52.918152 | orchestrator | 2025-07-12 15:47:52 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state STARTED 2025-07-12 15:47:52.919912 | orchestrator | 2025-07-12 15:47:52 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state 
STARTED 2025-07-12 15:47:52.919941 | orchestrator | 2025-07-12 15:47:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:48:41.724140 | orchestrator |
2025-07-12 15:48:41 | INFO  | Task d5c23244-8eda-4bdb-ae87-51741652b17e is in state SUCCESS 2025-07-12 15:48:41.726239 | orchestrator | 2025-07-12 15:48:41.726291 | orchestrator | 2025-07-12 15:48:41.726312 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:48:41.726339 | orchestrator | 2025-07-12 15:48:41.726357 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:48:41.726369 | orchestrator | Saturday 12 July 2025 15:45:38 +0000 (0:00:00.257) 0:00:00.257 ********* 2025-07-12 15:48:41.726381 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:41.726392 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:48:41.726403 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:48:41.726414 | orchestrator | 2025-07-12 15:48:41.726425 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:48:41.726436 | orchestrator | Saturday 12 July 2025 15:45:38 +0000 (0:00:00.303) 0:00:00.561 ********* 2025-07-12 15:48:41.726447 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-07-12 15:48:41.726458 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-07-12 15:48:41.726486 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-07-12 15:48:41.726519 | orchestrator | 2025-07-12 15:48:41.726531 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-07-12 15:48:41.726541 | orchestrator | 2025-07-12 15:48:41.726552 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 15:48:41.726562 | orchestrator | Saturday 12 July 2025 15:45:38 +0000 (0:00:00.451) 0:00:01.012 ********* 2025-07-12 15:48:41.726573 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:48:41.726584 | 
orchestrator | 2025-07-12 15:48:41.726595 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-07-12 15:48:41.726605 | orchestrator | Saturday 12 July 2025 15:45:39 +0000 (0:00:00.509) 0:00:01.522 ********* 2025-07-12 15:48:41.726616 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 15:48:41.726627 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 15:48:41.726637 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 15:48:41.726648 | orchestrator | 2025-07-12 15:48:41.726658 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-07-12 15:48:41.726669 | orchestrator | Saturday 12 July 2025 15:45:40 +0000 (0:00:00.714) 0:00:02.236 ********* 2025-07-12 15:48:41.726683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.726699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.726726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.726754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:48:41.726769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:48:41.726782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:48:41.726794 | orchestrator | 2025-07-12 15:48:41.726839 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 15:48:41.726853 | orchestrator | Saturday 12 July 2025 15:45:41 +0000 (0:00:01.718) 0:00:03.955 ********* 2025-07-12 15:48:41.726866 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:48:41.726878 | orchestrator | 2025-07-12 15:48:41.726891 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-07-12 15:48:41.726903 | orchestrator | Saturday 12 July 2025 15:45:42 +0000 (0:00:00.588) 0:00:04.543 ********* 2025-07-12 15:48:41.726924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.726952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.726966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.726980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:48:41.727002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:48:41.727030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:48:41.727043 | orchestrator | 2025-07-12 15:48:41.727055 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-07-12 15:48:41.727067 | orchestrator | Saturday 12 July 2025 15:45:44 +0000 (0:00:02.513) 0:00:07.057 ********* 
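The task-state polling at the top of this excerpt ("Task … is in state STARTED", "Wait 1 second(s) until the next check") is a plain wait-until-terminal-state loop over a set of task IDs. A minimal sketch of that pattern, assuming a caller-supplied `get_state` lookup (the actual OSISM client API is not shown in this log):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each pending task's state until all reach a terminal state.

    Mirrors the log pattern above: report every task's current state,
    then wait `interval` seconds before the next round of checks.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
        pending -= set(results)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

With `interval=1` this reproduces the "Wait 1 second(s)" message seen above; the roughly three-second gaps between checks in the log then come from the state lookups themselves, not the sleep.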
2025-07-12 15:48:41.727080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 15:48:41.727094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
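Every item in the loops above carries the same `healthcheck` shape: string-valued `interval`, `retries`, `start_period`, `timeout`, and a `test` command list such as `['CMD-SHELL', 'healthcheck_curl http://…:9200']`. Purely as an illustration of that data shape (the helper name and typed output are hypothetical, not part of kolla-ansible):

```python
def render_healthcheck(hc: dict) -> dict:
    """Normalize a Kolla-style healthcheck dict (all string fields, as
    seen in the log items) into typed, Docker-style option values."""
    return {
        "test": list(hc["test"]),            # command list, CMD-SHELL form
        "interval": f"{hc['interval']}s",    # seconds between probes
        "timeout": f"{hc['timeout']}s",      # per-probe timeout
        "start_period": f"{hc['start_period']}s",  # grace period at start
        "retries": int(hc["retries"]),       # failures before unhealthy
    }

# Example dict copied from the opensearch item in the log output.
example = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
    "timeout": "30",
}
```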
2025-07-12 15:48:41.727108 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:41.727128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 15:48:41.727154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 15:48:41.727166 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:41.727178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 15:48:41.727190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 15:48:41.727202 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:41.727216 | orchestrator | 2025-07-12 15:48:41.727235 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-07-12 15:48:41.727263 | orchestrator | Saturday 12 July 2025 15:45:46 +0000 (0:00:01.749) 0:00:08.807 ********* 2025-07-12 15:48:41.727290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 15:48:41.727319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 15:48:41.727340 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:41.727359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 15:48:41.727380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 15:48:41.727401 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:41.727419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 15:48:41.727436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 15:48:41.727447 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:41.727570 | orchestrator | 2025-07-12 15:48:41.727583 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-07-12 15:48:41.727594 | orchestrator | Saturday 12 July 2025 15:45:47 +0000 (0:00:00.933) 0:00:09.740 ********* 2025-07-12 15:48:41.727605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.727617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.727644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.727676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:48:41.727700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:48:41.727723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-12 15:48:41.727759 | orchestrator |
2025-07-12 15:48:41.727781 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2025-07-12 15:48:41.727802 | orchestrator | Saturday 12 July 2025 15:45:50 +0000 (0:00:02.489) 0:00:12.230 *********
2025-07-12 15:48:41.727850 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:48:41.727862 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:48:41.727873 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:48:41.727984 | orchestrator |
2025-07-12 15:48:41.728000 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2025-07-12 15:48:41.728011 | orchestrator | Saturday 12 July 2025 15:45:53 +0000 (0:00:03.799) 0:00:16.030 *********
2025-07-12 15:48:41.728022 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:48:41.728032 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:48:41.728043 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:48:41.728053 |
orchestrator | 2025-07-12 15:48:41.728063 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-07-12 15:48:41.728074 | orchestrator | Saturday 12 July 2025 15:45:55 +0000 (0:00:01.624) 0:00:17.654 ********* 2025-07-12 15:48:41.728096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.728116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-07-12 15:48:41.728130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 15:48:41.728151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-07-12 15:48:41.728197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 15:48:41.728224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-12 15:48:41.728237 | orchestrator |
2025-07-12 15:48:41.728248 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-12 15:48:41.728258 | orchestrator | Saturday 12 July 2025 15:45:57 +0000 (0:00:01.772) 0:00:19.427 *********
2025-07-12 15:48:41.728269 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:48:41.728279 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:48:41.728290 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:48:41.728300 | orchestrator |
2025-07-12 15:48:41.728311 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-12 15:48:41.728321 | orchestrator | Saturday 12 July 2025 15:45:57 +0000 (0:00:00.229) 0:00:19.656 *********
2025-07-12 15:48:41.728332 | orchestrator |
2025-07-12 15:48:41.728342 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-12 15:48:41.728353 | orchestrator | Saturday 12 July 2025 15:45:57 +0000 (0:00:00.060) 0:00:19.717 *********
2025-07-12 15:48:41.728371 | orchestrator |
2025-07-12 15:48:41.728382 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-12 15:48:41.728392 | orchestrator | Saturday 12 July 2025 15:45:57 +0000 (0:00:00.058) 0:00:19.775 *********
2025-07-12 15:48:41.728402 | orchestrator |
2025-07-12 15:48:41.728413 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-07-12 15:48:41.728423 | orchestrator | Saturday 12 July 2025 15:45:57 +0000 (0:00:00.173) 0:00:19.949 *********
2025-07-12 15:48:41.728433 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:48:41.728444 | orchestrator |
2025-07-12 15:48:41.728454 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-07-12 15:48:41.728465 | orchestrator | Saturday 12 July 2025 15:45:57 +0000 (0:00:00.179) 0:00:20.129 *********
2025-07-12 15:48:41.728475 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:48:41.728486 | orchestrator |
2025-07-12 15:48:41.728496 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-07-12 15:48:41.728507 | orchestrator | Saturday 12 July 2025 15:45:58 +0000 (0:00:00.208) 0:00:20.337 *********
2025-07-12 15:48:41.728517 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:48:41.728528 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:48:41.728539 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:48:41.728549 | orchestrator |
2025-07-12 15:48:41.728560 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-07-12 15:48:41.728570 | orchestrator | Saturday 12 July 2025 15:47:09 +0000 (0:01:11.578) 0:01:31.915 *********
2025-07-12 15:48:41.728581 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:48:41.728591 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:48:41.728601 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:48:41.728612 | orchestrator |
2025-07-12 15:48:41.728624 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-12 15:48:41.728636 | orchestrator | Saturday 12 July 2025 15:48:28 +0000 (0:01:18.317) 0:02:50.232 *********
2025-07-12 15:48:41.728649 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:48:41.728661 | orchestrator |
2025-07-12 15:48:41.728673 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-07-12 15:48:41.728685 | orchestrator | Saturday 12 July 2025 15:48:28 +0000 (0:00:00.719) 0:02:50.951 *********
2025-07-12 15:48:41.728698 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:48:41.728710 | orchestrator |
2025-07-12 15:48:41.728722 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-07-12 15:48:41.728734 | orchestrator | Saturday 12 July 2025 15:48:31 +0000 (0:00:02.521) 0:02:53.473 *********
2025-07-12 15:48:41.728746 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:48:41.728757 | orchestrator |
2025-07-12 15:48:41.728769 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-07-12 15:48:41.728781 | orchestrator | Saturday 12 July 2025 15:48:33 +0000 (0:00:02.299) 0:02:55.772 *********
2025-07-12 15:48:41.728793 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:48:41.728805 | orchestrator |
2025-07-12 15:48:41.728898 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-07-12 15:48:41.728917 | orchestrator | Saturday 12 July 2025 15:48:36 +0000 (0:00:02.799) 0:02:58.571 *********
2025-07-12 15:48:41.728936 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:48:41.728949 | orchestrator |
2025-07-12 15:48:41.728971 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:48:41.728983 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 15:48:41.729018 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 15:48:41.729030 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 15:48:41.729049 | orchestrator |
2025-07-12 15:48:41.729059 | orchestrator |
2025-07-12 15:48:41.729070 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:48:41.729087 | orchestrator | Saturday 12 July 2025 15:48:39 +0000 (0:00:02.876) 0:03:01.448 *********
2025-07-12 15:48:41.729098 | orchestrator | ===============================================================================
2025-07-12 15:48:41.729108 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 78.32s
2025-07-12 15:48:41.729119 | orchestrator | opensearch : Restart opensearch container ------------------------------ 71.58s
2025-07-12 15:48:41.729129 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.80s
2025-07-12 15:48:41.729140 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.88s
2025-07-12 15:48:41.729150 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.80s
2025-07-12 15:48:41.729160 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.52s
2025-07-12 15:48:41.729171 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.51s
2025-07-12 15:48:41.729181 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.49s
2025-07-12 15:48:41.729192 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.30s
2025-07-12 15:48:41.729202 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.77s
2025-07-12 15:48:41.729213 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.75s
2025-07-12 15:48:41.729223 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.72s
2025-07-12 15:48:41.729234 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.62s
2025-07-12 15:48:41.729244 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.93s
2025-07-12 15:48:41.729254 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.72s
2025-07-12 15:48:41.729265 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.71s
2025-07-12 15:48:41.729275 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s
2025-07-12 15:48:41.729286 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2025-07-12 15:48:41.729296 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s
2025-07-12 15:48:41.729306 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-07-12 15:48:41.729317 | orchestrator | 2025-07-12 15:48:41 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state STARTED
2025-07-12 15:48:41.729328 | orchestrator | 2025-07-12 15:48:41 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:48:41.729338 | orchestrator | 2025-07-12 15:48:41 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:48:44.780963 | orchestrator | 2025-07-12 15:48:44 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state STARTED
2025-07-12 15:48:44.783017 | orchestrator | 2025-07-12 15:48:44 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:48:44.783099 | orchestrator | 2025-07-12 15:48:44 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:48:47.835613 | orchestrator | 2025-07-12 15:48:47 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state STARTED
2025-07-12 15:48:47.839613 | orchestrator | 2025-07-12 15:48:47 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state STARTED
2025-07-12 15:48:47.839661 | orchestrator | 2025-07-12 15:48:47 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:48:50.893122 | orchestrator | 2025-07-12 15:48:50 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:48:50.894336 | orchestrator | 2025-07-12 15:48:50 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state STARTED
2025-07-12 15:48:50.895947 | orchestrator | 2025-07-12 15:48:50 | INFO  | Task 55c74a58-3930-4b80-b30b-34ad4727a529 is in state STARTED
2025-07-12 15:48:50.899439 | orchestrator | 2025-07-12 15:48:50 | INFO  | Task 366160b5-72ab-4fa9-8e1a-81adc2abc1de is in state SUCCESS
2025-07-12 15:48:50.901590 | orchestrator |
2025-07-12 15:48:50.901638 | orchestrator |
2025-07-12 15:48:50.901651 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-07-12 15:48:50.901663 | orchestrator |
2025-07-12 15:48:50.901674 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-07-12 15:48:50.901685 | orchestrator | Saturday 12 July 2025 15:45:38 +0000 (0:00:00.103) 0:00:00.103 *********
2025-07-12 15:48:50.901696 | orchestrator | ok: [localhost] => {
2025-07-12 15:48:50.901708 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-07-12 15:48:50.901719 | orchestrator | }
2025-07-12 15:48:50.901730 | orchestrator |
2025-07-12 15:48:50.901741 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-07-12 15:48:50.901752 | orchestrator | Saturday 12 July 2025 15:45:38 +0000 (0:00:00.045) 0:00:00.149 *********
2025-07-12 15:48:50.901763 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-07-12 15:48:50.901775 | orchestrator | ...ignoring
2025-07-12 15:48:50.901786 | orchestrator |
2025-07-12 15:48:50.901796 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-07-12 15:48:50.901851 | orchestrator | Saturday 12 July 2025 15:45:41 +0000 (0:00:02.901) 0:00:03.051 *********
2025-07-12 15:48:50.901864 | orchestrator | skipping: [localhost]
2025-07-12 15:48:50.901875 | orchestrator |
2025-07-12 15:48:50.901885 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-07-12 15:48:50.901896 | orchestrator | Saturday 12 July 2025 15:45:41 +0000 (0:00:00.057) 0:00:03.109 *********
2025-07-12 15:48:50.901906 | orchestrator | ok: [localhost]
2025-07-12 15:48:50.901917 | orchestrator |
2025-07-12 15:48:50.901927 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 15:48:50.901938 | orchestrator |
2025-07-12 15:48:50.901949 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 15:48:50.901959 | orchestrator | Saturday 12 July 2025 15:45:41 +0000 (0:00:00.288) 0:00:03.251 *********
2025-07-12 15:48:50.901969 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:48:50.901980 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:48:50.901991 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:48:50.902001 | orchestrator |
2025-07-12 15:48:50.902012 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 15:48:50.902022 | orchestrator | Saturday 12 July 2025 15:45:41 +0000 (0:00:00.288) 0:00:03.540 *********
2025-07-12 15:48:50.902033 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-07-12 15:48:50.902096 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-07-12 15:48:50.902108 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-07-12 15:48:50.902119 | orchestrator |
2025-07-12 15:48:50.902129 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-07-12 15:48:50.902140 | orchestrator |
2025-07-12 15:48:50.902150 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-07-12 15:48:50.902161 | orchestrator | Saturday 12 July 2025 15:45:42 +0000 (0:00:00.880) 0:00:04.420 *********
2025-07-12 15:48:50.902172 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 15:48:50.902183 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 15:48:50.902196 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 15:48:50.902208 | orchestrator |
2025-07-12 15:48:50.902221 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-07-12 15:48:50.902252 | orchestrator | Saturday 12 July 2025 15:45:42 +0000 (0:00:00.504) 0:00:04.925 *********
2025-07-12 15:48:50.902264 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:48:50.902278 | orchestrator |
2025-07-12 15:48:50.902289 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-07-12 15:48:50.902301 | orchestrator | Saturday 12 July 2025 15:45:43 +0000 (0:00:00.561) 0:00:05.486 *********
2025-07-12 15:48:50.902338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 15:48:50.902364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 15:48:50.902387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 15:48:50.902401 | orchestrator | 2025-07-12 15:48:50.902420 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-07-12 15:48:50.902432 | orchestrator | Saturday 12 July 2025 15:45:46 +0000 (0:00:03.487) 0:00:08.974 ********* 2025-07-12 15:48:50.902442 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.902453 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.902464 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.902474 | orchestrator | 2025-07-12 15:48:50.902485 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-07-12 15:48:50.902496 | orchestrator | Saturday 12 July 2025 15:45:47 +0000 (0:00:00.793) 0:00:09.767 ********* 2025-07-12 15:48:50.902506 | orchestrator | 
skipping: [testbed-node-1] 2025-07-12 15:48:50.902517 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.902527 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.902538 | orchestrator | 2025-07-12 15:48:50.902548 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-07-12 15:48:50.902559 | orchestrator | Saturday 12 July 2025 15:45:49 +0000 (0:00:01.547) 0:00:11.315 ********* 2025-07-12 15:48:50.902575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 15:48:50.902604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 15:48:50.902622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 15:48:50.902644 | orchestrator | 2025-07-12 15:48:50.902656 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-07-12 15:48:50.902666 | orchestrator | Saturday 12 July 2025 15:45:53 +0000 (0:00:04.677) 0:00:15.993 ********* 2025-07-12 15:48:50.902677 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.902687 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.902698 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.902708 | orchestrator | 2025-07-12 15:48:50.902719 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-07-12 15:48:50.902730 | orchestrator | Saturday 12 July 2025 15:45:55 +0000 (0:00:01.214) 0:00:17.207 ********* 2025-07-12 15:48:50.902740 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.902750 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:48:50.902761 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:48:50.902771 | orchestrator | 2025-07-12 15:48:50.902782 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 15:48:50.902792 | orchestrator | Saturday 12 July 2025 15:45:58 +0000 (0:00:03.585) 0:00:20.792 ********* 2025-07-12 15:48:50.902824 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:48:50.902838 | orchestrator | 2025-07-12 15:48:50.902848 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-12 15:48:50.902859 | orchestrator | Saturday 12 July 2025 15:45:59 +0000 (0:00:00.582) 0:00:21.376 ********* 2025-07-12 15:48:50.902880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:48:50.902893 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.902910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:48:50.902929 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.902948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:48:50.902961 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.902972 | orchestrator | 2025-07-12 15:48:50.902982 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-12 15:48:50.902993 | orchestrator | Saturday 12 July 2025 15:46:01 
+0000 (0:00:02.471) 0:00:23.847 ********* 2025-07-12 15:48:50.903009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:48:50.903028 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
15:48:50.903046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:48:50.903058 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.903075 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:48:50.903099 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.903110 | orchestrator | 2025-07-12 15:48:50.903121 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2025-07-12 15:48:50.903131 | orchestrator | Saturday 12 July 2025 15:46:04 +0000 (0:00:02.264) 0:00:26.112 ********* 2025-07-12 15:48:50.903149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2025-07-12 15:48:50.903161 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.903177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:48:50.903196 
| orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.903208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 15:48:50.903220 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
15:48:50.903230 | orchestrator | 2025-07-12 15:48:50.903241 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-07-12 15:48:50.903252 | orchestrator | Saturday 12 July 2025 15:46:06 +0000 (0:00:02.291) 0:00:28.404 ********* 2025-07-12 15:48:50.903276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 15:48:50.903297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-07-12 15:48:50.903324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 15:48:50.903344 | orchestrator | 2025-07-12 15:48:50.903355 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2025-07-12 15:48:50.903365 | orchestrator | Saturday 12 July 2025 15:46:09 +0000 (0:00:03.073) 0:00:31.477 ********* 2025-07-12 15:48:50.903376 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.903386 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:48:50.903397 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:48:50.903407 | orchestrator | 2025-07-12 15:48:50.903418 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-07-12 15:48:50.903429 | orchestrator | Saturday 12 July 2025 15:46:10 +0000 (0:00:01.267) 0:00:32.745 ********* 2025-07-12 15:48:50.903439 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:50.903450 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:48:50.903461 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:48:50.903472 | orchestrator | 2025-07-12 15:48:50.903482 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-07-12 15:48:50.903493 | orchestrator | Saturday 12 July 2025 15:46:11 +0000 (0:00:00.376) 0:00:33.121 ********* 2025-07-12 15:48:50.903503 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:50.903514 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:48:50.903524 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:48:50.903535 | orchestrator | 2025-07-12 15:48:50.903546 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-07-12 15:48:50.903556 | orchestrator | Saturday 12 July 2025 15:46:11 +0000 (0:00:00.358) 0:00:33.480 ********* 2025-07-12 15:48:50.903567 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-07-12 15:48:50.903578 | orchestrator | ...ignoring 2025-07-12 15:48:50.903589 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-07-12 15:48:50.903600 | orchestrator | ...ignoring 2025-07-12 15:48:50.903610 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-07-12 15:48:50.903621 | orchestrator | ...ignoring 2025-07-12 15:48:50.903631 | orchestrator | 2025-07-12 15:48:50.903642 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-07-12 15:48:50.903652 | orchestrator | Saturday 12 July 2025 15:46:22 +0000 (0:00:10.953) 0:00:44.434 ********* 2025-07-12 15:48:50.903663 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:50.903673 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:48:50.903684 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:48:50.903694 | orchestrator | 2025-07-12 15:48:50.903705 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-07-12 15:48:50.903724 | orchestrator | Saturday 12 July 2025 15:46:23 +0000 (0:00:00.774) 0:00:45.208 ********* 2025-07-12 15:48:50.903734 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.903745 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.903756 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.903766 | orchestrator | 2025-07-12 15:48:50.903777 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-07-12 15:48:50.903787 | orchestrator | Saturday 12 July 2025 15:46:23 +0000 (0:00:00.528) 0:00:45.737 ********* 2025-07-12 15:48:50.903798 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.903869 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.903888 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.903901 | orchestrator | 2025-07-12 15:48:50.903912 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-07-12 15:48:50.903923 | orchestrator | Saturday 12 July 2025 15:46:24 +0000 (0:00:00.407) 0:00:46.145 ********* 2025-07-12 15:48:50.903933 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.903943 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.903954 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.903965 | orchestrator | 2025-07-12 15:48:50.903975 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-07-12 15:48:50.903993 | orchestrator | Saturday 12 July 2025 15:46:24 +0000 (0:00:00.446) 0:00:46.591 ********* 2025-07-12 15:48:50.904004 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:50.904015 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:48:50.904026 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:48:50.904036 | orchestrator | 2025-07-12 15:48:50.904047 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-07-12 15:48:50.904057 | orchestrator | Saturday 12 July 2025 15:46:25 +0000 (0:00:00.665) 0:00:47.256 ********* 2025-07-12 15:48:50.904068 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.904079 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.904089 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.904100 | orchestrator | 2025-07-12 15:48:50.904111 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 15:48:50.904121 | orchestrator | Saturday 12 July 2025 15:46:25 +0000 (0:00:00.474) 0:00:47.730 ********* 2025-07-12 15:48:50.904132 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.904142 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.904153 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-07-12 15:48:50.904163 | orchestrator | 2025-07-12 
15:48:50.904174 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-07-12 15:48:50.904190 | orchestrator | Saturday 12 July 2025 15:46:26 +0000 (0:00:00.400) 0:00:48.131 ********* 2025-07-12 15:48:50.904201 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.904212 | orchestrator | 2025-07-12 15:48:50.904222 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-07-12 15:48:50.904233 | orchestrator | Saturday 12 July 2025 15:46:37 +0000 (0:00:10.982) 0:00:59.114 ********* 2025-07-12 15:48:50.904243 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:50.904254 | orchestrator | 2025-07-12 15:48:50.904264 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 15:48:50.904275 | orchestrator | Saturday 12 July 2025 15:46:37 +0000 (0:00:00.127) 0:00:59.241 ********* 2025-07-12 15:48:50.904285 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.904296 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.904306 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.904317 | orchestrator | 2025-07-12 15:48:50.904328 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-07-12 15:48:50.904338 | orchestrator | Saturday 12 July 2025 15:46:38 +0000 (0:00:01.105) 0:01:00.346 ********* 2025-07-12 15:48:50.904348 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.904359 | orchestrator | 2025-07-12 15:48:50.904369 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-07-12 15:48:50.904388 | orchestrator | Saturday 12 July 2025 15:46:46 +0000 (0:00:07.977) 0:01:08.323 ********* 2025-07-12 15:48:50.904398 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:50.904409 | orchestrator | 2025-07-12 15:48:50.904419 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-07-12 15:48:50.904430 | orchestrator | Saturday 12 July 2025 15:46:47 +0000 (0:00:01.614) 0:01:09.938 ********* 2025-07-12 15:48:50.904440 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:50.904451 | orchestrator | 2025-07-12 15:48:50.904462 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-07-12 15:48:50.904472 | orchestrator | Saturday 12 July 2025 15:46:50 +0000 (0:00:02.511) 0:01:12.450 ********* 2025-07-12 15:48:50.904483 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.904494 | orchestrator | 2025-07-12 15:48:50.904504 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-07-12 15:48:50.904514 | orchestrator | Saturday 12 July 2025 15:46:50 +0000 (0:00:00.116) 0:01:12.566 ********* 2025-07-12 15:48:50.904525 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.904536 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.904546 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.904556 | orchestrator | 2025-07-12 15:48:50.904567 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-07-12 15:48:50.904577 | orchestrator | Saturday 12 July 2025 15:46:51 +0000 (0:00:00.503) 0:01:13.070 ********* 2025-07-12 15:48:50.904588 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.904599 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-12 15:48:50.904609 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:48:50.904620 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:48:50.904630 | orchestrator | 2025-07-12 15:48:50.904641 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-12 15:48:50.904651 | orchestrator | skipping: no hosts matched 2025-07-12 15:48:50.904662 | orchestrator | 2025-07-12 15:48:50.904673 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 15:48:50.904683 | orchestrator | 2025-07-12 15:48:50.904694 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 15:48:50.904704 | orchestrator | Saturday 12 July 2025 15:46:51 +0000 (0:00:00.319) 0:01:13.389 ********* 2025-07-12 15:48:50.904715 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:48:50.904725 | orchestrator | 2025-07-12 15:48:50.904736 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 15:48:50.904746 | orchestrator | Saturday 12 July 2025 15:47:15 +0000 (0:00:24.562) 0:01:37.952 ********* 2025-07-12 15:48:50.904757 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:48:50.904767 | orchestrator | 2025-07-12 15:48:50.904778 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 15:48:50.904788 | orchestrator | Saturday 12 July 2025 15:47:31 +0000 (0:00:15.593) 0:01:53.546 ********* 2025-07-12 15:48:50.904799 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:48:50.904873 | orchestrator | 2025-07-12 15:48:50.904885 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 15:48:50.904896 | orchestrator | 2025-07-12 15:48:50.904907 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 15:48:50.904917 | orchestrator | Saturday 12 July 2025 15:47:34 +0000 (0:00:02.486) 0:01:56.032 ********* 2025-07-12 15:48:50.904928 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:48:50.904938 | orchestrator | 2025-07-12 15:48:50.904949 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 15:48:50.904967 | orchestrator | Saturday 12 July 2025 15:47:52 +0000 (0:00:18.393) 0:02:14.426 ********* 2025-07-12 15:48:50.904978 | 
orchestrator | ok: [testbed-node-2] 2025-07-12 15:48:50.904988 | orchestrator | 2025-07-12 15:48:50.904999 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 15:48:50.905010 | orchestrator | Saturday 12 July 2025 15:48:12 +0000 (0:00:20.570) 0:02:34.997 ********* 2025-07-12 15:48:50.905028 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:48:50.905039 | orchestrator | 2025-07-12 15:48:50.905050 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-12 15:48:50.905061 | orchestrator | 2025-07-12 15:48:50.905071 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 15:48:50.905082 | orchestrator | Saturday 12 July 2025 15:48:15 +0000 (0:00:02.840) 0:02:37.837 ********* 2025-07-12 15:48:50.905093 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.905103 | orchestrator | 2025-07-12 15:48:50.905114 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 15:48:50.905124 | orchestrator | Saturday 12 July 2025 15:48:27 +0000 (0:00:11.987) 0:02:49.825 ********* 2025-07-12 15:48:50.905135 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:50.905145 | orchestrator | 2025-07-12 15:48:50.905156 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 15:48:50.905166 | orchestrator | Saturday 12 July 2025 15:48:33 +0000 (0:00:05.612) 0:02:55.437 ********* 2025-07-12 15:48:50.905182 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:50.905193 | orchestrator | 2025-07-12 15:48:50.905204 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-12 15:48:50.905214 | orchestrator | 2025-07-12 15:48:50.905225 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-12 15:48:50.905235 | orchestrator | 
Saturday 12 July 2025 15:48:35 +0000 (0:00:02.382) 0:02:57.820 ********* 2025-07-12 15:48:50.905246 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:48:50.905256 | orchestrator | 2025-07-12 15:48:50.905267 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-07-12 15:48:50.905278 | orchestrator | Saturday 12 July 2025 15:48:36 +0000 (0:00:00.570) 0:02:58.391 ********* 2025-07-12 15:48:50.905288 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.905299 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.905309 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.905318 | orchestrator | 2025-07-12 15:48:50.905328 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-07-12 15:48:50.905337 | orchestrator | Saturday 12 July 2025 15:48:38 +0000 (0:00:02.552) 0:03:00.943 ********* 2025-07-12 15:48:50.905347 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.905356 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.905365 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.905375 | orchestrator | 2025-07-12 15:48:50.905385 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-07-12 15:48:50.905394 | orchestrator | Saturday 12 July 2025 15:48:41 +0000 (0:00:02.251) 0:03:03.194 ********* 2025-07-12 15:48:50.905404 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.905413 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.905423 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.905432 | orchestrator | 2025-07-12 15:48:50.905441 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-07-12 15:48:50.905451 | orchestrator | Saturday 12 July 2025 15:48:43 +0000 (0:00:02.223) 0:03:05.418 ********* 2025-07-12 15:48:50.905460 | 
orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.905470 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.905479 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:48:50.905488 | orchestrator | 2025-07-12 15:48:50.905498 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-07-12 15:48:50.905507 | orchestrator | Saturday 12 July 2025 15:48:45 +0000 (0:00:02.156) 0:03:07.575 ********* 2025-07-12 15:48:50.905517 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:48:50.905526 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:48:50.905535 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:48:50.905545 | orchestrator | 2025-07-12 15:48:50.905554 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-12 15:48:50.905571 | orchestrator | Saturday 12 July 2025 15:48:48 +0000 (0:00:02.946) 0:03:10.521 ********* 2025-07-12 15:48:50.905581 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:48:50.905590 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:48:50.905599 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:48:50.905609 | orchestrator | 2025-07-12 15:48:50.905618 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:48:50.905628 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-12 15:48:50.905638 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-07-12 15:48:50.905649 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-12 15:48:50.905658 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-12 15:48:50.905668 | orchestrator | 2025-07-12 15:48:50.905677 | orchestrator | 2025-07-12 15:48:50.905687 | orchestrator | 
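The "Wait for MariaDB service to sync WSREP" handlers above succeed only once Galera reports the node as fully synced. A minimal sketch of that readiness test, assuming the state is read via `SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'` (the parsing helper below is illustrative, not the role's actual implementation):

```python
def wsrep_synced(status_rows):
    """Return True when Galera reports this node as fully synced.

    status_rows: list of (Variable_name, Value) tuples, as returned by
    SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'.
    """
    status = dict(status_rows)
    # Galera reports "Synced" once the node has caught up with the cluster;
    # other values ("Donor/Desynced", "Joining", ...) mean keep waiting.
    return status.get("wsrep_local_state_comment") == "Synced"


# Poll in the spirit of the handlers above (retry cadence is an assumption):
# while not wsrep_synced(fetch_status()): time.sleep(2)
```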
TASKS RECAP ******************************************************************** 2025-07-12 15:48:50.905696 | orchestrator | Saturday 12 July 2025 15:48:48 +0000 (0:00:00.225) 0:03:10.747 ********* 2025-07-12 15:48:50.905705 | orchestrator | =============================================================================== 2025-07-12 15:48:50.905715 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.96s 2025-07-12 15:48:50.905724 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.16s 2025-07-12 15:48:50.905739 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.99s 2025-07-12 15:48:50.905749 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.98s 2025-07-12 15:48:50.905758 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.95s 2025-07-12 15:48:50.905768 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.98s 2025-07-12 15:48:50.905777 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.61s 2025-07-12 15:48:50.905787 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.33s 2025-07-12 15:48:50.905796 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.68s 2025-07-12 15:48:50.905827 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.59s 2025-07-12 15:48:50.905837 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.49s 2025-07-12 15:48:50.905847 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.07s 2025-07-12 15:48:50.905856 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.95s 2025-07-12 15:48:50.905870 | orchestrator | Check MariaDB service 
--------------------------------------------------- 2.90s 2025-07-12 15:48:50.905880 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.55s 2025-07-12 15:48:50.905889 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.51s 2025-07-12 15:48:50.905899 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.47s 2025-07-12 15:48:50.905908 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.38s 2025-07-12 15:48:50.905917 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.29s 2025-07-12 15:48:50.905926 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.26s 2025-07-12 15:48:50.905936 | orchestrator | 2025-07-12 15:48:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:48:53.955609 | orchestrator | 2025-07-12 15:48:53 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED 2025-07-12 15:48:53.957527 | orchestrator | 2025-07-12 15:48:53 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state STARTED 2025-07-12 15:48:53.959672 | orchestrator | 2025-07-12 15:48:53 | INFO  | Task 55c74a58-3930-4b80-b30b-34ad4727a529 is in state STARTED 2025-07-12 15:48:53.959997 | orchestrator | 2025-07-12 15:48:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:48:57.006093 | orchestrator | 2025-07-12 15:48:57 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED 2025-07-12 15:48:57.007578 | orchestrator | 2025-07-12 15:48:57 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state STARTED 2025-07-12 15:48:57.012197 | orchestrator | 2025-07-12 15:48:57 | INFO  | Task 55c74a58-3930-4b80-b30b-34ad4727a529 is in state STARTED 2025-07-12 15:48:57.012269 | orchestrator | 2025-07-12 15:48:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:49:00.054869 | 
orchestrator | 2025-07-12 15:49:00 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED [... identical status checks for the three tasks repeated every ~3 seconds ...] 2025-07-12 15:49:57.999283 | orchestrator | 2025-07-12 15:49:57 | INFO  | Task
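The status checks above follow a simple poll-until-terminal pattern: look up each task's state, and sleep between rounds while any task is still STARTED. A minimal sketch of that loop, assuming a caller-supplied `get_state(task_id)` lookup (hypothetical helper; the real OSISM task API is not shown in this log):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, max_rounds=None):
    """Poll get_state(task_id) until no task is in state 'STARTED'.

    Returns the final {task_id: state} mapping. interval mirrors the
    "Wait 1 second(s) until the next check" pauses in the log above;
    max_rounds is an optional safety bound (an assumption, not from the log).
    """
    rounds = 0
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        if all(s != "STARTED" for s in states.values()):
            return states
        rounds += 1
        if max_rounds is not None and rounds >= max_rounds:
            return states
        time.sleep(interval)
```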
cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED 2025-07-12 15:49:58.000147 | orchestrator | 2025-07-12 15:49:57 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state STARTED 2025-07-12 15:49:58.003199 | orchestrator | 2025-07-12 15:49:57 | INFO  | Task 55c74a58-3930-4b80-b30b-34ad4727a529 is in state STARTED 2025-07-12 15:49:58.003270 | orchestrator | 2025-07-12 15:49:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:50:01.058705 | orchestrator | 2025-07-12 15:50:01 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED 2025-07-12 15:50:01.062712 | orchestrator | 2025-07-12 15:50:01.062828 | orchestrator | 2025-07-12 15:50:01.063561 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-07-12 15:50:01.063579 | orchestrator | 2025-07-12 15:50:01.063591 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-07-12 15:50:01.063603 | orchestrator | Saturday 12 July 2025 15:47:47 +0000 (0:00:00.529) 0:00:00.530 ********* 2025-07-12 15:50:01.063614 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:50:01.063627 | orchestrator | 2025-07-12 15:50:01.063638 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-07-12 15:50:01.063649 | orchestrator | Saturday 12 July 2025 15:47:48 +0000 (0:00:00.523) 0:00:01.053 ********* 2025-07-12 15:50:01.063661 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:50:01.063672 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:50:01.063683 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:50:01.063694 | orchestrator | 2025-07-12 15:50:01.063705 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-07-12 15:50:01.063716 | orchestrator | Saturday 12 July 2025 15:47:48 +0000 (0:00:00.620) 0:00:01.674 ********* 
2025-07-12 15:50:01.063727 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:50:01.063738 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:50:01.063749 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:50:01.063759 | orchestrator | 2025-07-12 15:50:01.063770 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-07-12 15:50:01.063781 | orchestrator | Saturday 12 July 2025 15:47:48 +0000 (0:00:00.267) 0:00:01.942 ********* 2025-07-12 15:50:01.063830 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:50:01.063841 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:50:01.063854 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:50:01.063865 | orchestrator | 2025-07-12 15:50:01.063876 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-07-12 15:50:01.063887 | orchestrator | Saturday 12 July 2025 15:47:49 +0000 (0:00:00.719) 0:00:02.662 ********* 2025-07-12 15:50:01.063898 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:50:01.063908 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:50:01.063919 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:50:01.063929 | orchestrator | 2025-07-12 15:50:01.063940 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-07-12 15:50:01.063967 | orchestrator | Saturday 12 July 2025 15:47:50 +0000 (0:00:00.307) 0:00:02.969 ********* 2025-07-12 15:50:01.063979 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:50:01.063990 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:50:01.064000 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:50:01.064011 | orchestrator | 2025-07-12 15:50:01.064022 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-07-12 15:50:01.064033 | orchestrator | Saturday 12 July 2025 15:47:50 +0000 (0:00:00.306) 0:00:03.275 ********* 2025-07-12 15:50:01.064044 | orchestrator | ok: [testbed-node-3] 2025-07-12 
15:50:01.064055 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:50:01.064065 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:50:01.064076 | orchestrator | 2025-07-12 15:50:01.064086 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-07-12 15:50:01.064097 | orchestrator | Saturday 12 July 2025 15:47:50 +0000 (0:00:00.305) 0:00:03.581 ********* 2025-07-12 15:50:01.064108 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.064120 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.064131 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.064144 | orchestrator | 2025-07-12 15:50:01.064157 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-07-12 15:50:01.064170 | orchestrator | Saturday 12 July 2025 15:47:51 +0000 (0:00:00.459) 0:00:04.041 ********* 2025-07-12 15:50:01.064205 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:50:01.064220 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:50:01.064232 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:50:01.064243 | orchestrator | 2025-07-12 15:50:01.064254 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-07-12 15:50:01.064264 | orchestrator | Saturday 12 July 2025 15:47:51 +0000 (0:00:00.280) 0:00:04.321 ********* 2025-07-12 15:50:01.064275 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-12 15:50:01.064286 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 15:50:01.064296 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 15:50:01.064307 | orchestrator | 2025-07-12 15:50:01.064317 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-07-12 15:50:01.064328 | orchestrator | Saturday 12 July 2025 
15:47:51 +0000 (0:00:00.605) 0:00:04.927 ********* 2025-07-12 15:50:01.064338 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:50:01.064366 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:50:01.064377 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:50:01.064387 | orchestrator | 2025-07-12 15:50:01.064398 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-07-12 15:50:01.064408 | orchestrator | Saturday 12 July 2025 15:47:52 +0000 (0:00:00.429) 0:00:05.356 ********* 2025-07-12 15:50:01.064419 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-12 15:50:01.064429 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 15:50:01.064440 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 15:50:01.064451 | orchestrator | 2025-07-12 15:50:01.064461 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-07-12 15:50:01.064472 | orchestrator | Saturday 12 July 2025 15:47:54 +0000 (0:00:02.163) 0:00:07.519 ********* 2025-07-12 15:50:01.064483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-12 15:50:01.064493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-12 15:50:01.064505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-12 15:50:01.064523 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.064541 | orchestrator | 2025-07-12 15:50:01.064560 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-07-12 15:50:01.064683 | orchestrator | Saturday 12 July 2025 15:47:54 +0000 (0:00:00.387) 0:00:07.906 ********* 2025-07-12 15:50:01.064702 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.064717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.064729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.064740 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.064751 | orchestrator | 2025-07-12 15:50:01.064761 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-07-12 15:50:01.064772 | orchestrator | Saturday 12 July 2025 15:47:55 +0000 (0:00:00.779) 0:00:08.686 ********* 2025-07-12 15:50:01.064811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.064847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.064859 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.064870 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.064880 | orchestrator | 2025-07-12 15:50:01.064891 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-07-12 15:50:01.064902 | orchestrator | Saturday 12 July 2025 15:47:55 +0000 (0:00:00.160) 0:00:08.846 ********* 2025-07-12 15:50:01.064915 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '65637d68c52d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-12 15:47:53.073548', 'end': '2025-07-12 15:47:53.114489', 'delta': '0:00:00.040941', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['65637d68c52d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-07-12 15:50:01.064930 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cca4958387a8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-12 15:47:53.846019', 'end': '2025-07-12 15:47:53.894469', 'delta': '0:00:00.048450', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': 
False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cca4958387a8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-07-12 15:50:01.064982 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b6071a3711d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-12 15:47:54.384731', 'end': '2025-07-12 15:47:54.422743', 'delta': '0:00:00.038012', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b6071a3711d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-07-12 15:50:01.064995 | orchestrator | 2025-07-12 15:50:01.065006 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-07-12 15:50:01.065017 | orchestrator | Saturday 12 July 2025 15:47:56 +0000 (0:00:00.365) 0:00:09.212 ********* 2025-07-12 15:50:01.065028 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:50:01.065048 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:50:01.065059 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:50:01.065069 | orchestrator | 2025-07-12 15:50:01.065080 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-07-12 15:50:01.065091 | orchestrator | Saturday 12 July 2025 15:47:56 +0000 (0:00:00.422) 0:00:09.634 ********* 2025-07-12 15:50:01.065102 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] 2025-07-12 15:50:01.065113 | orchestrator | 2025-07-12 15:50:01.065124 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-07-12 15:50:01.065135 | orchestrator | Saturday 12 July 2025 15:47:58 +0000 (0:00:01.604) 0:00:11.239 ********* 2025-07-12 15:50:01.065145 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065156 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.065167 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.065177 | orchestrator | 2025-07-12 15:50:01.065188 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-07-12 15:50:01.065199 | orchestrator | Saturday 12 July 2025 15:47:58 +0000 (0:00:00.282) 0:00:11.522 ********* 2025-07-12 15:50:01.065215 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065226 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.065236 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.065247 | orchestrator | 2025-07-12 15:50:01.065257 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-12 15:50:01.065268 | orchestrator | Saturday 12 July 2025 15:47:58 +0000 (0:00:00.387) 0:00:11.909 ********* 2025-07-12 15:50:01.065278 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065289 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.065300 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.065310 | orchestrator | 2025-07-12 15:50:01.065321 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-07-12 15:50:01.065332 | orchestrator | Saturday 12 July 2025 15:47:59 +0000 (0:00:00.479) 0:00:12.389 ********* 2025-07-12 15:50:01.065343 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:50:01.065353 | orchestrator | 2025-07-12 15:50:01.065364 | orchestrator | TASK [ceph-facts : Generate 
cluster fsid] ************************************** 2025-07-12 15:50:01.065375 | orchestrator | Saturday 12 July 2025 15:47:59 +0000 (0:00:00.126) 0:00:12.515 ********* 2025-07-12 15:50:01.065385 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065396 | orchestrator | 2025-07-12 15:50:01.065406 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-12 15:50:01.065417 | orchestrator | Saturday 12 July 2025 15:47:59 +0000 (0:00:00.222) 0:00:12.738 ********* 2025-07-12 15:50:01.065427 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065438 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.065448 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.065459 | orchestrator | 2025-07-12 15:50:01.065469 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-07-12 15:50:01.065480 | orchestrator | Saturday 12 July 2025 15:48:00 +0000 (0:00:00.296) 0:00:13.034 ********* 2025-07-12 15:50:01.065491 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065501 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.065512 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.065523 | orchestrator | 2025-07-12 15:50:01.065533 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-07-12 15:50:01.065544 | orchestrator | Saturday 12 July 2025 15:48:00 +0000 (0:00:00.337) 0:00:13.372 ********* 2025-07-12 15:50:01.065554 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065565 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.065575 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.065586 | orchestrator | 2025-07-12 15:50:01.065596 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-07-12 15:50:01.065607 | orchestrator | Saturday 12 July 2025 15:48:00 +0000 
(0:00:00.490) 0:00:13.862 ********* 2025-07-12 15:50:01.065618 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065634 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.065645 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.065655 | orchestrator | 2025-07-12 15:50:01.065666 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-07-12 15:50:01.065677 | orchestrator | Saturday 12 July 2025 15:48:01 +0000 (0:00:00.336) 0:00:14.198 ********* 2025-07-12 15:50:01.065687 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065698 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.065709 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.065719 | orchestrator | 2025-07-12 15:50:01.065730 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-07-12 15:50:01.065741 | orchestrator | Saturday 12 July 2025 15:48:01 +0000 (0:00:00.313) 0:00:14.512 ********* 2025-07-12 15:50:01.065751 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065762 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.065773 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.065821 | orchestrator | 2025-07-12 15:50:01.065843 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-07-12 15:50:01.065904 | orchestrator | Saturday 12 July 2025 15:48:01 +0000 (0:00:00.310) 0:00:14.822 ********* 2025-07-12 15:50:01.065917 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.065928 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.065939 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.065949 | orchestrator | 2025-07-12 15:50:01.065960 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-12 15:50:01.065971 | orchestrator | Saturday 12 July 2025 15:48:02 +0000 
(0:00:00.514) 0:00:15.336 ********* 2025-07-12 15:50:01.065983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c0189bb--8103--55ae--95fc--ac60d34dc15f-osd--block--0c0189bb--8103--55ae--95fc--ac60d34dc15f', 'dm-uuid-LVM-tf720NRkUyPSvEBWzFdYzrzVAVv12n3Ctx3WNdW8l0E21IRHNT0pJMf31Czyjp3L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f-osd--block--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f', 'dm-uuid-LVM-TlTe1Avr2uKAcYFGEozdZjlJBbzRj5RtcV3spMZ5fndkYcs4g3hs93vJZjrIHT9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066231 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0c0189bb--8103--55ae--95fc--ac60d34dc15f-osd--block--0c0189bb--8103--55ae--95fc--ac60d34dc15f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jyl4Kj-iZOl-sy7q-Pq72-HD7M-gIjU-dg1WiH', 'scsi-0QEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984', 'scsi-SQEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed518422--90c3--5ab9--913f--91d667874e9d-osd--block--ed518422--90c3--5ab9--913f--91d667874e9d', 'dm-uuid-LVM-XVmadN0mqQ2oHtzAhxUE6pN3WTcrFBP0WnjWT8Hxg8AFRWeEheH4oiqNL1GeIsoM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f-osd--block--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s6i02u-ktCZ-MuCo-rpun-X43h-5Be3-TQShRX', 'scsi-0QEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2', 'scsi-SQEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--66e431f6--efaf--5b66--8dd9--edbf314ce410-osd--block--66e431f6--efaf--5b66--8dd9--edbf314ce410', 'dm-uuid-LVM-X7Q43GJC6NOnI6uN1nufyrfG9fHQSD9jrK39rmFAu4UvCyjKPGT499811uPfawyh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21', 'scsi-SQEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-07-12 15:50:01.066398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066444 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.066460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ed518422--90c3--5ab9--913f--91d667874e9d-osd--block--ed518422--90c3--5ab9--913f--91d667874e9d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XhgY2L-dNwu-Wjve-oZCH-eyUb-VpDX-4pdae2', 'scsi-0QEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f', 'scsi-SQEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--66e431f6--efaf--5b66--8dd9--edbf314ce410-osd--block--66e431f6--efaf--5b66--8dd9--edbf314ce410'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N4rQjE-Lh8a-mzut-ehOW-vGJw-81If-Fbu8pa', 'scsi-0QEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd', 'scsi-SQEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96', 'scsi-SQEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066601 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.066612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98eaa118--ceae--5fd7--911b--5a5c065fb5e7-osd--block--98eaa118--ceae--5fd7--911b--5a5c065fb5e7', 'dm-uuid-LVM-I64y3JwzPT8m2omvdUM4ThksJnVVo5jdKhE5B1OA4VTYgglcCz6olKyaXoO2aiaq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--d3106c13--92fd--5dcd--ba4d--74ce9f77b023-osd--block--d3106c13--92fd--5dcd--ba4d--74ce9f77b023', 'dm-uuid-LVM-iQcQMh1cncewEXXEaxf144lrXeIlB3JcF6MDxTVlUyqUBwh1ozHrVMrJKwQhsLk3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01 | INFO  | Task 5710d072-5a41-43b6-84f7-495e10c7edc2 is in state SUCCESS 2025-07-12 15:50:01.066654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-07-12 15:50:01.066693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 15:50:01.066773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--98eaa118--ceae--5fd7--911b--5a5c065fb5e7-osd--block--98eaa118--ceae--5fd7--911b--5a5c065fb5e7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ALQMmF-hxLg-dfN1-POEx-XGkM-suB0-m6rHC3', 'scsi-0QEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d', 'scsi-SQEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d3106c13--92fd--5dcd--ba4d--74ce9f77b023-osd--block--d3106c13--92fd--5dcd--ba4d--74ce9f77b023'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GQ7H0t-n3DY-Urch-Q632-9o6L-oJBd-RuffH9', 'scsi-0QEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be', 'scsi-SQEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a', 'scsi-SQEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 15:50:01.066877 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:50:01.066887 | orchestrator | 2025-07-12 15:50:01.066898 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-07-12 15:50:01.066909 | orchestrator | Saturday 12 July 2025 15:48:02 +0000 (0:00:00.591) 0:00:15.928 ********* 2025-07-12 15:50:01.066921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c0189bb--8103--55ae--95fc--ac60d34dc15f-osd--block--0c0189bb--8103--55ae--95fc--ac60d34dc15f', 'dm-uuid-LVM-tf720NRkUyPSvEBWzFdYzrzVAVv12n3Ctx3WNdW8l0E21IRHNT0pJMf31Czyjp3L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.066945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f-osd--block--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f', 'dm-uuid-LVM-TlTe1Avr2uKAcYFGEozdZjlJBbzRj5RtcV3spMZ5fndkYcs4g3hs93vJZjrIHT9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.066956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.066968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.066979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.066997 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067031 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067054 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed518422--90c3--5ab9--913f--91d667874e9d-osd--block--ed518422--90c3--5ab9--913f--91d667874e9d', 'dm-uuid-LVM-XVmadN0mqQ2oHtzAhxUE6pN3WTcrFBP0WnjWT8Hxg8AFRWeEheH4oiqNL1GeIsoM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--66e431f6--efaf--5b66--8dd9--edbf314ce410-osd--block--66e431f6--efaf--5b66--8dd9--edbf314ce410', 'dm-uuid-LVM-X7Q43GJC6NOnI6uN1nufyrfG9fHQSD9jrK39rmFAu4UvCyjKPGT499811uPfawyh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ba9f296-83b5-4523-b70f-ede13f56d35b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067126 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067144 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0c0189bb--8103--55ae--95fc--ac60d34dc15f-osd--block--0c0189bb--8103--55ae--95fc--ac60d34dc15f'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jyl4Kj-iZOl-sy7q-Pq72-HD7M-gIjU-dg1WiH', 'scsi-0QEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984', 'scsi-SQEMU_QEMU_HARDDISK_c6699afa-886d-4139-8698-8a8fafe98984'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f-osd--block--2608adc8--8e22--540f--a74d--9f1d5d1ddc4f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s6i02u-ktCZ-MuCo-rpun-X43h-5Be3-TQShRX', 'scsi-0QEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2', 'scsi-SQEMU_QEMU_HARDDISK_4e5b43f9-5557-4a03-9895-8e671249b5b2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21', 'scsi-SQEMU_QEMU_HARDDISK_0aec1d56-840e-4d62-87fc-8ad42993ed21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067204 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067233 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067246 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067272 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067284 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067295 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067306 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:50:01.067327 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_934592de-8849-4d55-9151-342b895547cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 15:50:01.067351 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ed518422--90c3--5ab9--913f--91d667874e9d-osd--block--ed518422--90c3--5ab9--913f--91d667874e9d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XhgY2L-dNwu-Wjve-oZCH-eyUb-VpDX-4pdae2', 'scsi-0QEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f', 'scsi-SQEMU_QEMU_HARDDISK_9415964e-ba41-448d-be5c-d5fc92ddea3f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067364 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--66e431f6--efaf--5b66--8dd9--edbf314ce410-osd--block--66e431f6--efaf--5b66--8dd9--edbf314ce410'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N4rQjE-Lh8a-mzut-ehOW-vGJw-81If-Fbu8pa', 'scsi-0QEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd', 'scsi-SQEMU_QEMU_HARDDISK_df26c144-7e2c-487c-9e8f-effdfe3555dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067375 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96', 'scsi-SQEMU_QEMU_HARDDISK_80301f58-6d09-4d29-bcb1-b411833d1e96'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067393 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067465 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:50:01.067478 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98eaa118--ceae--5fd7--911b--5a5c065fb5e7-osd--block--98eaa118--ceae--5fd7--911b--5a5c065fb5e7', 'dm-uuid-LVM-I64y3JwzPT8m2omvdUM4ThksJnVVo5jdKhE5B1OA4VTYgglcCz6olKyaXoO2aiaq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067495 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d3106c13--92fd--5dcd--ba4d--74ce9f77b023-osd--block--d3106c13--92fd--5dcd--ba4d--74ce9f77b023', 'dm-uuid-LVM-iQcQMh1cncewEXXEaxf144lrXeIlB3JcF6MDxTVlUyqUBwh1ozHrVMrJKwQhsLk3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067507 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067518 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067530 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067549 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067568 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067596 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067607 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067627 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3892144-1c31-4d8e-8a84-28397e34627e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
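The `ceph--…-osd--block--…` holder names in these device facts are device-mapper names: inside a dm name, every `-` belonging to the VG or LV name is escaped as `--`, and a single `-` separates the VG from the LV. A minimal sketch that undoes the escaping to recover `vg/lv` (using the dm-0 holder from testbed-node-5 above; assumes well-formed names):

```python
# Device-mapper escapes '-' inside VG/LV names as '--'; the lone '-'
# left over is the VG/LV separator.
def dm_unescape(name: str) -> str:
    # Protect escaped dashes, turn the VG/LV separator into '/', restore.
    return name.replace("--", "\x00").replace("-", "/").replace("\x00", "-")

# dm-0 holder seen on testbed-node-5 in the facts above.
dm = ("ceph--98eaa118--ceae--5fd7--911b--5a5c065fb5e7-"
      "osd--block--98eaa118--ceae--5fd7--911b--5a5c065fb5e7")
print(dm_unescape(dm))
# ceph-98eaa118-ceae-5fd7-911b-5a5c065fb5e7/osd-block-98eaa118-ceae-5fd7-911b-5a5c065fb5e7
```

This is why the ceph OSD LVs look doubled-up in `holders` and in the `dm-name-…` link ids while `lvs` would show plain single-dash names.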
2025-07-12 15:50:01.067651 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--98eaa118--ceae--5fd7--911b--5a5c065fb5e7-osd--block--98eaa118--ceae--5fd7--911b--5a5c065fb5e7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ALQMmF-hxLg-dfN1-POEx-XGkM-suB0-m6rHC3', 'scsi-0QEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d', 'scsi-SQEMU_QEMU_HARDDISK_6698acfe-c205-405d-be66-12c19a56960d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067663 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d3106c13--92fd--5dcd--ba4d--74ce9f77b023-osd--block--d3106c13--92fd--5dcd--ba4d--74ce9f77b023'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GQ7H0t-n3DY-Urch-Q632-9o6L-oJBd-RuffH9', 'scsi-0QEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be', 'scsi-SQEMU_QEMU_HARDDISK_2d047699-b504-4740-af1d-648b929835be'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067675 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a', 'scsi-SQEMU_QEMU_HARDDISK_e2bb8cb1-296e-41d9-9659-79f1ba9bca2a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 15:50:01.067692 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-14-52-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 15:50:01.067710 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:50:01.067721 | orchestrator |
2025-07-12 15:50:01.067732 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-07-12 15:50:01.067743 | orchestrator | Saturday 12 July 2025 15:48:03 +0000 (0:00:00.658) 0:00:16.586 *********
2025-07-12 15:50:01.067754 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:50:01.067765 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:50:01.067776 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:50:01.067814 | orchestrator |
2025-07-12 15:50:01.067826 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-07-12 15:50:01.067837 | orchestrator | Saturday 12 July 2025 15:48:04 +0000 (0:00:00.690) 0:00:17.277 *********
2025-07-12 15:50:01.067848 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:50:01.067858 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:50:01.067869 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:50:01.067879 | orchestrator |
2025-07-12 15:50:01.067890 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 15:50:01.067900 | orchestrator | Saturday 12 July 2025 15:48:04 +0000 (0:00:00.474) 0:00:17.751 *********
2025-07-12 15:50:01.067911 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:50:01.067921 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:50:01.067931 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:50:01.067942 | orchestrator |
2025-07-12 15:50:01.067952 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 15:50:01.067963 | orchestrator | Saturday 12 July 2025 15:48:05 +0000 (0:00:00.741) 0:00:18.492 *********
2025-07-12 15:50:01.067974 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.067984 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:50:01.067995 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:50:01.068005 | orchestrator |
2025-07-12 15:50:01.068035 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 15:50:01.068047 | orchestrator | Saturday 12 July 2025 15:48:05 +0000 (0:00:00.301) 0:00:18.794 *********
2025-07-12 15:50:01.068070 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.068081 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:50:01.068092 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:50:01.068102 | orchestrator |
2025-07-12 15:50:01.068113 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 15:50:01.068124 | orchestrator | Saturday 12 July 2025 15:48:06 +0000 (0:00:00.405) 0:00:19.200 *********
2025-07-12 15:50:01.068134 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.068145 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:50:01.068156 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:50:01.068167 | orchestrator |
2025-07-12 15:50:01.068177 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-07-12 15:50:01.068188 | orchestrator | Saturday 12 July 2025 15:48:06 +0000 (0:00:00.525) 0:00:19.725 *********
2025-07-12 15:50:01.068198 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 15:50:01.068209 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 15:50:01.068220 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 15:50:01.068230 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 15:50:01.068241 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 15:50:01.068251 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 15:50:01.068269 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 15:50:01.068279 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 15:50:01.068290 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 15:50:01.068300 | orchestrator |
2025-07-12 15:50:01.068311 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-07-12 15:50:01.068322 | orchestrator | Saturday 12 July 2025 15:48:07 +0000 (0:00:00.895) 0:00:20.621 *********
2025-07-12 15:50:01.068333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 15:50:01.068343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 15:50:01.068354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 15:50:01.068365 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.068375 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 15:50:01.068386 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 15:50:01.068396 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 15:50:01.068407 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:50:01.068418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 15:50:01.068428 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 15:50:01.068439 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 15:50:01.068449 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:50:01.068460 | orchestrator |
2025-07-12 15:50:01.068470 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-07-12 15:50:01.068481 | orchestrator | Saturday 12 July 2025 15:48:08 +0000 (0:00:00.371) 0:00:20.992 *********
2025-07-12 15:50:01.068492 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:50:01.068503 | orchestrator |
2025-07-12 15:50:01.068514 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-12 15:50:01.068525 | orchestrator | Saturday 12 July 2025 15:48:08 +0000 (0:00:00.728) 0:00:21.721 *********
2025-07-12 15:50:01.068542 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.068553 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:50:01.068564 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:50:01.068575 | orchestrator |
2025-07-12 15:50:01.068586 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-12 15:50:01.068597 | orchestrator | Saturday 12 July 2025 15:48:09 +0000 (0:00:00.330) 0:00:22.051 *********
2025-07-12 15:50:01.068607 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.068618 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:50:01.068629 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:50:01.068640 | orchestrator |
2025-07-12 15:50:01.068650 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-12 15:50:01.068661 | orchestrator | Saturday 12 July 2025 15:48:09 +0000 (0:00:00.320) 0:00:22.372 *********
2025-07-12 15:50:01.068672 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.068683 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:50:01.068693 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:50:01.068704 | orchestrator |
2025-07-12 15:50:01.068715 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-12 15:50:01.068725 | orchestrator | Saturday 12 July 2025 15:48:09 +0000 (0:00:00.315) 0:00:22.688 *********
2025-07-12 15:50:01.068736 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:50:01.068747 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:50:01.068757 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:50:01.068768 | orchestrator |
2025-07-12 15:50:01.068779 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-12 15:50:01.068852 | orchestrator | Saturday 12 July 2025 15:48:10 +0000 (0:00:00.586) 0:00:23.274 *********
2025-07-12 15:50:01.068864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 15:50:01.068882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 15:50:01.068893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 15:50:01.068904 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.068915 | orchestrator |
2025-07-12 15:50:01.068925 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-12 15:50:01.068936 | orchestrator | Saturday 12 July 2025 15:48:10 +0000 (0:00:00.374) 0:00:23.648 *********
2025-07-12 15:50:01.068947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 15:50:01.068963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 15:50:01.068974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 15:50:01.068985 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.068995 | orchestrator |
2025-07-12 15:50:01.069006 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-12 15:50:01.069017 | orchestrator | Saturday 12 July 2025 15:48:11 +0000 (0:00:00.382) 0:00:24.030 *********
2025-07-12 15:50:01.069028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 15:50:01.069038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 15:50:01.069049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 15:50:01.069060 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.069070 | orchestrator |
2025-07-12 15:50:01.069080 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-12 15:50:01.069090 | orchestrator | Saturday 12 July 2025 15:48:11 +0000 (0:00:00.362) 0:00:24.392 *********
2025-07-12 15:50:01.069100 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:50:01.069109 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:50:01.069119 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:50:01.069128 | orchestrator |
2025-07-12 15:50:01.069138 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-12 15:50:01.069147 | orchestrator | Saturday 12 July 2025 15:48:11 +0000 (0:00:00.325) 0:00:24.718 *********
2025-07-12 15:50:01.069157 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-12 15:50:01.069167 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-12 15:50:01.069176 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-12 15:50:01.069185 | orchestrator |
2025-07-12 15:50:01.069195 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-07-12 15:50:01.069204 | orchestrator | Saturday 12 July 2025 15:48:12 +0000 (0:00:00.512) 0:00:25.231 *********
2025-07-12 15:50:01.069214 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 15:50:01.069223 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 15:50:01.069233 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 15:50:01.069242 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 15:50:01.069252 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-12 15:50:01.069261 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-12 15:50:01.069271 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-12 15:50:01.069280 | orchestrator |
2025-07-12 15:50:01.069290 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-07-12 15:50:01.069299 | orchestrator | Saturday 12 July 2025 15:48:13 +0000 (0:00:01.122) 0:00:26.354 *********
2025-07-12 15:50:01.069309 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 15:50:01.069319 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 15:50:01.069328 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 15:50:01.069337 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 15:50:01.069356 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-12 15:50:01.069366 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-12 15:50:01.069381 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-12 15:50:01.069392 | orchestrator |
2025-07-12 15:50:01.069401 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-07-12 15:50:01.069411 | orchestrator | Saturday 12 July 2025 15:48:15 +0000 (0:00:02.017) 0:00:28.371 *********
2025-07-12 15:50:01.069420 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:50:01.069430 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:50:01.069439 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-07-12 15:50:01.069449 | orchestrator |
2025-07-12 15:50:01.069458 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-07-12 15:50:01.069468 | orchestrator | Saturday 12 July 2025 15:48:15 +0000 (0:00:00.388) 0:00:28.760 *********
2025-07-12 15:50:01.069479 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 15:50:01.069490 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 15:50:01.069500 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 15:50:01.069515 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 15:50:01.069525 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 15:50:01.069535 | orchestrator |
2025-07-12 15:50:01.069544 | orchestrator | TASK [generate keys] ***********************************************************
2025-07-12 15:50:01.069554 | orchestrator | Saturday 12 July 2025 15:49:03 +0000 (0:00:47.464) 0:01:16.224 *********
2025-07-12 15:50:01.069563 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069573 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069582 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069592 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069601 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069611 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069620 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-07-12 15:50:01.069629 | orchestrator |
2025-07-12 15:50:01.069639 | orchestrator | TASK [get keys from monitors] **************************************************
2025-07-12 15:50:01.069648 | orchestrator | Saturday 12 July 2025 15:49:27 +0000 (0:00:24.419) 0:01:40.644 *********
2025-07-12 15:50:01.069658 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069673 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069683 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069692 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069702 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069711 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069720 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 15:50:01.069730 | orchestrator |
2025-07-12 15:50:01.069739 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-07-12 15:50:01.069749 | orchestrator | Saturday 12 July 2025 15:49:40 +0000 (0:00:12.642) 0:01:53.287 *********
2025-07-12 15:50:01.069758 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069768 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 15:50:01.069777 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 15:50:01.069922 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069963 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 15:50:01.069974 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 15:50:01.069983 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.069993 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 15:50:01.070002 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 15:50:01.070012 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.070061 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 15:50:01.070071 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 15:50:01.070081 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.070091 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 15:50:01.070100 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 15:50:01.070110 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 15:50:01.070119 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 15:50:01.070129 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 15:50:01.070139 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-07-12 15:50:01.070149 | orchestrator |
2025-07-12 15:50:01.070159 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:50:01.070169 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-07-12 15:50:01.070180 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-07-12 15:50:01.070196 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-07-12 15:50:01.070207 | orchestrator |
2025-07-12 15:50:01.070216 | orchestrator |
2025-07-12 15:50:01.070226 | orchestrator |
2025-07-12 15:50:01.070235 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:50:01.070245 | orchestrator | Saturday 12 July 2025 15:49:58 +0000 (0:00:18.097) 0:02:11.384 *********
2025-07-12 15:50:01.070266 | orchestrator | ===============================================================================
2025-07-12 15:50:01.070276 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.46s
2025-07-12 15:50:01.070286 | orchestrator | generate keys ---------------------------------------------------------- 24.42s
2025-07-12 15:50:01.070295 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.10s
2025-07-12 15:50:01.070305 | orchestrator | get keys from monitors ------------------------------------------------- 12.64s 2025-07-12 15:50:01.070315 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.16s 2025-07-12 15:50:01.070325 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.02s 2025-07-12 15:50:01.070334 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.60s 2025-07-12 15:50:01.070344 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.12s 2025-07-12 15:50:01.070353 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.90s 2025-07-12 15:50:01.070361 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.78s 2025-07-12 15:50:01.070369 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.74s 2025-07-12 15:50:01.070376 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2025-07-12 15:50:01.070384 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.72s 2025-07-12 15:50:01.070392 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.69s 2025-07-12 15:50:01.070400 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.66s 2025-07-12 15:50:01.070408 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.62s 2025-07-12 15:50:01.070416 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s 2025-07-12 15:50:01.070423 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.59s 2025-07-12 15:50:01.070431 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s 2025-07-12 
15:50:01.070439 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.53s
2025-07-12 15:50:01.070447 | orchestrator | 2025-07-12 15:50:01 | INFO  | Task 56051c19-980a-4cf9-b090-732a723ca60c is in state STARTED
2025-07-12 15:50:01.070455 | orchestrator | 2025-07-12 15:50:01 | INFO  | Task 55c74a58-3930-4b80-b30b-34ad4727a529 is in state STARTED
2025-07-12 15:50:01.070463 | orchestrator | 2025-07-12 15:50:01 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:50:04.107149 | orchestrator | 2025-07-12 15:50:04 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:50:04.110666 | orchestrator | 2025-07-12 15:50:04 | INFO  | Task 56051c19-980a-4cf9-b090-732a723ca60c is in state STARTED
2025-07-12 15:50:04.112350 | orchestrator | 2025-07-12 15:50:04 | INFO  | Task 55c74a58-3930-4b80-b30b-34ad4727a529 is in state STARTED
2025-07-12 15:50:04.112389 | orchestrator | 2025-07-12 15:50:04 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:50:31.575435 | orchestrator | 2025-07-12 15:50:31 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:50:31.577348 | orchestrator | 2025-07-12 15:50:31 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:50:31.579112 | orchestrator | 2025-07-12 15:50:31 | INFO  | Task 56051c19-980a-4cf9-b090-732a723ca60c is in state SUCCESS
2025-07-12 15:50:31.582635 | orchestrator | 2025-07-12 15:50:31 | INFO  | Task 55c74a58-3930-4b80-b30b-34ad4727a529 is in state STARTED
2025-07-12 15:50:31.582903 | orchestrator | 2025-07-12 15:50:31 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:50:43.753272 | orchestrator | 2025-07-12 15:50:43 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:50:43.753691 | orchestrator | 2025-07-12 15:50:43 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:50:43.755846 | orchestrator | 2025-07-12 15:50:43 | INFO  | Task 55c74a58-3930-4b80-b30b-34ad4727a529 is in state SUCCESS
2025-07-12 15:50:43.756281 | orchestrator | 2025-07-12 15:50:43 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:50:43.757962 | orchestrator |
2025-07-12 15:50:43.758004 | orchestrator |
2025-07-12 15:50:43.758060 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-07-12 15:50:43.758073 | orchestrator |
2025-07-12 15:50:43.758084 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-07-12 15:50:43.758095 | orchestrator | Saturday 12 July 2025 15:50:02 +0000 (0:00:00.157) 0:00:00.157 *********
2025-07-12 15:50:43.758106 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-07-12 15:50:43.758121 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-07-12 15:50:43.758138 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-07-12 15:50:43.758155 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 15:50:43.758172 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-07-12 15:50:43.758185 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-07-12 15:50:43.758195 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-07-12 15:50:43.758205 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-07-12 15:50:43.758245 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-07-12 15:50:43.758262 | orchestrator |
2025-07-12 15:50:43.758272 | orchestrator | TASK [Create share directory] **************************************************
2025-07-12 15:50:43.758282 | orchestrator | Saturday 12 July 2025 15:50:07 +0000 (0:00:04.437) 0:00:04.595 *********
2025-07-12 15:50:43.758292 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-12 15:50:43.758302 | orchestrator |
2025-07-12 15:50:43.758312 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-07-12 15:50:43.758321 | orchestrator | Saturday 12 July 2025 15:50:08 +0000 (0:00:00.957) 0:00:05.552 *********
2025-07-12 15:50:43.758331 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-07-12 15:50:43.758340 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-07-12 15:50:43.758350 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-07-12 15:50:43.758359 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 15:50:43.758369 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-07-12 15:50:43.758378 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-07-12 15:50:43.758387 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-07-12 15:50:43.758397 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-07-12 15:50:43.758406 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-07-12 15:50:43.758415 | orchestrator |
2025-07-12 15:50:43.758425 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-07-12 15:50:43.758434 | orchestrator | Saturday 12 July 2025 15:50:21 +0000 (0:00:13.160) 0:00:18.712 *********
2025-07-12 15:50:43.758445 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-07-12 15:50:43.758462 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-07-12 15:50:43.758479 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-07-12 15:50:43.758489 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 15:50:43.758499 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-07-12 15:50:43.758508 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-07-12 15:50:43.758518 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-07-12 15:50:43.758528 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-07-12 15:50:43.758537 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-07-12 15:50:43.758546 | orchestrator |
2025-07-12 15:50:43.758556 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:50:43.758566 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:50:43.758578 | orchestrator |
2025-07-12 15:50:43.758589 | orchestrator |
2025-07-12 15:50:43.758622 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:50:43.758635 | orchestrator | Saturday 12 July 2025 15:50:28 +0000 (0:00:06.807) 0:00:25.519 *********
2025-07-12 15:50:43.758646 | orchestrator | ===============================================================================
2025-07-12 15:50:43.758657 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.16s
2025-07-12 15:50:43.758668 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.81s
2025-07-12 15:50:43.758679 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.44s
2025-07-12 15:50:43.758698 | orchestrator | Create share directory -------------------------------------------------- 0.96s
2025-07-12 15:50:43.758710 | orchestrator |
15:50:43.758719 | orchestrator |
2025-07-12 15:50:43.758729 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 15:50:43.758738 | orchestrator |
2025-07-12 15:50:43.758758 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 15:50:43.758768 | orchestrator | Saturday 12 July 2025 15:48:52 +0000 (0:00:00.256) 0:00:00.256 *********
2025-07-12 15:50:43.758806 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:50:43.758816 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:50:43.758826 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:50:43.758835 | orchestrator |
2025-07-12 15:50:43.758845 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 15:50:43.758854 | orchestrator | Saturday 12 July 2025 15:48:53 +0000 (0:00:00.285) 0:00:00.542 *********
2025-07-12 15:50:43.758864 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-07-12 15:50:43.758873 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-07-12 15:50:43.758882 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-07-12 15:50:43.758892 | orchestrator |
2025-07-12 15:50:43.758901 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-07-12 15:50:43.758911 | orchestrator |
2025-07-12 15:50:43.758920 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-07-12 15:50:43.758929 | orchestrator | Saturday 12 July 2025 15:48:53 +0000 (0:00:00.421) 0:00:00.963 *********
2025-07-12 15:50:43.758939 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:50:43.758948 | orchestrator |
2025-07-12 15:50:43.758957 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
15:50:43.758967 | orchestrator | Saturday 12 July 2025 15:48:54 +0000 (0:00:00.537) 0:00:01.500 *********
2025-07-12 15:50:43.758982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-12 15:50:43.759025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-12 15:50:43.759053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-12 15:50:43.759075 | orchestrator |
2025-07-12 15:50:43.759085 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-07-12 15:50:43.759095 | orchestrator | Saturday 12 July 2025 15:48:55 +0000 (0:00:01.129) 0:00:02.630 *********
2025-07-12 15:50:43.759104 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:50:43.759114 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:50:43.759123 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:50:43.759132 | orchestrator |
2025-07-12 15:50:43.759142 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-07-12 15:50:43.759151 | orchestrator | Saturday 12 July 2025 15:48:55 +0000 (0:00:00.453) 0:00:03.083 *********
2025-07-12 15:50:43.759160 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-07-12 15:50:43.759176 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-07-12 15:50:43.759186 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-07-12 15:50:43.759195 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-07-12 15:50:43.759205 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-07-12 15:50:43.759214 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-07-12 15:50:43.759223 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-07-12 15:50:43.759233 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-07-12 15:50:43.759242 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-07-12 15:50:43.759251 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-07-12 15:50:43.759260 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-07-12 15:50:43.759270 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-07-12 15:50:43.759279 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-07-12 15:50:43.759288 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-07-12 15:50:43.759298 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-07-12 15:50:43.759307 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-07-12 15:50:43.759316 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-07-12 15:50:43.759325 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-07-12 15:50:43.759334 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-07-12 15:50:43.759344 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-07-12 15:50:43.759353 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-07-12 15:50:43.759362 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-07-12 15:50:43.759372 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-07-12 15:50:43.759381 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-07-12 15:50:43.759391 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-07-12 15:50:43.759402 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-07-12 15:50:43.759418 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-07-12 15:50:43.759428 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-07-12 15:50:43.759437 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-07-12 15:50:43.759447 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-07-12 15:50:43.759456 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-07-12 15:50:43.759465 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-07-12 15:50:43.759474 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-07-12 15:50:43.759489 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-07-12 15:50:43.759499 | orchestrator |
2025-07-12 15:50:43.759508 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-12 15:50:43.759518 | orchestrator | Saturday 12 July 2025 15:48:56 +0000 (0:00:00.733) 0:00:03.816 *********
2025-07-12 15:50:43.759527 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:50:43.759536 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:50:43.759546 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:50:43.759555 | orchestrator |
2025-07-12 15:50:43.759564 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-12 15:50:43.759574 | orchestrator | Saturday 12 July 2025 15:48:56 +0000 (0:00:00.312) 0:00:04.129 *********
2025-07-12 15:50:43.759583 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:50:43.759592 | orchestrator |
2025-07-12 15:50:43.759610 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-12 15:50:43.759627 | orchestrator | Saturday 12 July 2025 15:48:56 +0000 (0:00:00.106) 0:00:04.235 *********
2025-07-12 15:50:43.759642 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:50:43.759656 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:50:43.759671 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:50:43.759686 | orchestrator |
2025-07-12 15:50:43.759700 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-12 15:50:43.759715 | orchestrator | Saturday 12 July 2025 15:48:57 +0000 (0:00:00.498) 0:00:04.734 *********
2025-07-12 15:50:43.759729 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:50:43.759742 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:50:43.759756 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:50:43.759771 | orchestrator |
2025-07-12 15:50:43.759811 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-12 15:50:43.759828 | orchestrator | Saturday 12 July 2025 15:48:57 +0000 (0:00:00.310) 0:00:05.044 *********
2025-07-12 15:50:43.759844 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:50:43.759860 | orchestrator |
2025-07-12 15:50:43.759876 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-12 15:50:43.759892 | orchestrator | Saturday 12 July 2025 15:48:57 +0000 (0:00:00.141) 0:00:05.186 *********
2025-07-12 15:50:43.759909 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:50:43.759928 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:50:43.759948 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:50:43.759980 | orchestrator |
2025-07-12 15:50:43.759996 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-12 15:50:43.760007 | orchestrator | Saturday 12 July 2025 15:48:58 +0000 (0:00:00.287) 0:00:05.473 *********
2025-07-12 15:50:43.760018 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:50:43.760029 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:50:43.760039 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:50:43.760050 | orchestrator |
2025-07-12 15:50:43.760061 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-12 15:50:43.760071 | orchestrator | Saturday 12 July 2025 15:48:58 +0000 (0:00:00.308) 0:00:05.782 *********
2025-07-12 15:50:43.760082 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:50:43.760092 | orchestrator |
2025-07-12 15:50:43.760103 | orchestrator | TASK
[horizon : Update custom policy file name] ******************************** 2025-07-12 15:50:43.760114 | orchestrator | Saturday 12 July 2025 15:48:58 +0000 (0:00:00.323) 0:00:06.105 ********* 2025-07-12 15:50:43.760124 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.760135 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.760146 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.760157 | orchestrator | 2025-07-12 15:50:43.760167 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 15:50:43.760178 | orchestrator | Saturday 12 July 2025 15:48:59 +0000 (0:00:00.328) 0:00:06.434 ********* 2025-07-12 15:50:43.760189 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:50:43.760199 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:50:43.760210 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:50:43.760220 | orchestrator | 2025-07-12 15:50:43.760231 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 15:50:43.760242 | orchestrator | Saturday 12 July 2025 15:48:59 +0000 (0:00:00.319) 0:00:06.754 ********* 2025-07-12 15:50:43.760253 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.760263 | orchestrator | 2025-07-12 15:50:43.760274 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 15:50:43.760284 | orchestrator | Saturday 12 July 2025 15:48:59 +0000 (0:00:00.125) 0:00:06.879 ********* 2025-07-12 15:50:43.760295 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.760374 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.760386 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.760397 | orchestrator | 2025-07-12 15:50:43.760408 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 15:50:43.760426 | orchestrator | Saturday 12 July 2025 15:48:59 +0000 
(0:00:00.289) 0:00:07.169 ********* 2025-07-12 15:50:43.760446 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:50:43.760464 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:50:43.760476 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:50:43.760486 | orchestrator | 2025-07-12 15:50:43.760497 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 15:50:43.760510 | orchestrator | Saturday 12 July 2025 15:49:00 +0000 (0:00:00.500) 0:00:07.669 ********* 2025-07-12 15:50:43.760529 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.760548 | orchestrator | 2025-07-12 15:50:43.760559 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 15:50:43.760570 | orchestrator | Saturday 12 July 2025 15:49:00 +0000 (0:00:00.123) 0:00:07.793 ********* 2025-07-12 15:50:43.760581 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.760592 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.760602 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.760613 | orchestrator | 2025-07-12 15:50:43.760623 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 15:50:43.760633 | orchestrator | Saturday 12 July 2025 15:49:00 +0000 (0:00:00.280) 0:00:08.073 ********* 2025-07-12 15:50:43.760644 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:50:43.760655 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:50:43.760665 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:50:43.760676 | orchestrator | 2025-07-12 15:50:43.760694 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 15:50:43.760713 | orchestrator | Saturday 12 July 2025 15:49:01 +0000 (0:00:00.317) 0:00:08.391 ********* 2025-07-12 15:50:43.760724 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.760734 | orchestrator | 2025-07-12 15:50:43.760745 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 15:50:43.760756 | orchestrator | Saturday 12 July 2025 15:49:01 +0000 (0:00:00.127) 0:00:08.518 ********* 2025-07-12 15:50:43.760766 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.760829 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.760849 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.760866 | orchestrator | 2025-07-12 15:50:43.760884 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 15:50:43.760896 | orchestrator | Saturday 12 July 2025 15:49:01 +0000 (0:00:00.491) 0:00:09.010 ********* 2025-07-12 15:50:43.760907 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:50:43.760928 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:50:43.760940 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:50:43.760950 | orchestrator | 2025-07-12 15:50:43.760961 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 15:50:43.760971 | orchestrator | Saturday 12 July 2025 15:49:02 +0000 (0:00:00.357) 0:00:09.368 ********* 2025-07-12 15:50:43.760982 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.760992 | orchestrator | 2025-07-12 15:50:43.761003 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 15:50:43.761014 | orchestrator | Saturday 12 July 2025 15:49:02 +0000 (0:00:00.134) 0:00:09.502 ********* 2025-07-12 15:50:43.761024 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.761035 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.761045 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.761056 | orchestrator | 2025-07-12 15:50:43.761067 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 15:50:43.761077 | orchestrator | Saturday 12 July 2025 
15:49:02 +0000 (0:00:00.288) 0:00:09.791 ********* 2025-07-12 15:50:43.761088 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:50:43.761098 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:50:43.761109 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:50:43.761119 | orchestrator | 2025-07-12 15:50:43.761130 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 15:50:43.761140 | orchestrator | Saturday 12 July 2025 15:49:02 +0000 (0:00:00.293) 0:00:10.085 ********* 2025-07-12 15:50:43.761151 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.761161 | orchestrator | 2025-07-12 15:50:43.761172 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 15:50:43.761183 | orchestrator | Saturday 12 July 2025 15:49:02 +0000 (0:00:00.126) 0:00:10.212 ********* 2025-07-12 15:50:43.761193 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.761204 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.761214 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.761225 | orchestrator | 2025-07-12 15:50:43.761236 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 15:50:43.761246 | orchestrator | Saturday 12 July 2025 15:49:03 +0000 (0:00:00.511) 0:00:10.723 ********* 2025-07-12 15:50:43.761257 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:50:43.761267 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:50:43.761278 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:50:43.761288 | orchestrator | 2025-07-12 15:50:43.761299 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 15:50:43.761318 | orchestrator | Saturday 12 July 2025 15:49:03 +0000 (0:00:00.436) 0:00:11.160 ********* 2025-07-12 15:50:43.761338 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.761357 | orchestrator | 2025-07-12 
15:50:43.761376 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 15:50:43.761395 | orchestrator | Saturday 12 July 2025 15:49:03 +0000 (0:00:00.121) 0:00:11.282 ********* 2025-07-12 15:50:43.761425 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.761443 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.761461 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.761478 | orchestrator | 2025-07-12 15:50:43.761498 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 15:50:43.761515 | orchestrator | Saturday 12 July 2025 15:49:04 +0000 (0:00:00.278) 0:00:11.560 ********* 2025-07-12 15:50:43.761530 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:50:43.761548 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:50:43.761567 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:50:43.761587 | orchestrator | 2025-07-12 15:50:43.761604 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 15:50:43.761623 | orchestrator | Saturday 12 July 2025 15:49:04 +0000 (0:00:00.538) 0:00:12.098 ********* 2025-07-12 15:50:43.761635 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.761645 | orchestrator | 2025-07-12 15:50:43.761656 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 15:50:43.761666 | orchestrator | Saturday 12 July 2025 15:49:04 +0000 (0:00:00.141) 0:00:12.239 ********* 2025-07-12 15:50:43.761677 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.761688 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.761698 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.761709 | orchestrator | 2025-07-12 15:50:43.761719 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-07-12 15:50:43.761730 | orchestrator | 
Saturday 12 July 2025 15:49:05 +0000 (0:00:00.307) 0:00:12.547 ********* 2025-07-12 15:50:43.761740 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:50:43.761751 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:50:43.761761 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:50:43.761799 | orchestrator | 2025-07-12 15:50:43.761821 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-07-12 15:50:43.761838 | orchestrator | Saturday 12 July 2025 15:49:06 +0000 (0:00:01.651) 0:00:14.199 ********* 2025-07-12 15:50:43.761849 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 15:50:43.761860 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 15:50:43.761878 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 15:50:43.761889 | orchestrator | 2025-07-12 15:50:43.761900 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-07-12 15:50:43.761910 | orchestrator | Saturday 12 July 2025 15:49:08 +0000 (0:00:02.005) 0:00:16.205 ********* 2025-07-12 15:50:43.761921 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 15:50:43.761932 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 15:50:43.761943 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 15:50:43.761954 | orchestrator | 2025-07-12 15:50:43.761964 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-07-12 15:50:43.761984 | orchestrator | Saturday 12 July 2025 15:49:10 +0000 (0:00:02.057) 0:00:18.263 ********* 2025-07-12 15:50:43.761996 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 15:50:43.762006 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 15:50:43.762052 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 15:50:43.762066 | orchestrator | 2025-07-12 15:50:43.762076 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-07-12 15:50:43.762087 | orchestrator | Saturday 12 July 2025 15:49:12 +0000 (0:00:01.514) 0:00:19.777 ********* 2025-07-12 15:50:43.762098 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.762125 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.762136 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.762146 | orchestrator | 2025-07-12 15:50:43.762157 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-07-12 15:50:43.762168 | orchestrator | Saturday 12 July 2025 15:49:12 +0000 (0:00:00.298) 0:00:20.076 ********* 2025-07-12 15:50:43.762178 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.762189 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.762200 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.762210 | orchestrator | 2025-07-12 15:50:43.762221 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 15:50:43.762231 | orchestrator | Saturday 12 July 2025 15:49:13 +0000 (0:00:00.337) 0:00:20.413 ********* 2025-07-12 15:50:43.762242 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:50:43.762253 | orchestrator | 2025-07-12 15:50:43.762263 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-07-12 
15:50:43.762274 | orchestrator | Saturday 12 July 2025 15:49:14 +0000 (0:00:00.900) 0:00:21.314 ********* 2025-07-12 15:50:43.762294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 15:50:43.762319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 15:50:43.762346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 15:50:43.762358 | orchestrator | 2025-07-12 15:50:43.762369 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-07-12 15:50:43.762380 | orchestrator | Saturday 12 July 2025 15:49:15 +0000 (0:00:01.528) 0:00:22.842 ********* 2025-07-12 15:50:43.762401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 15:50:43.762420 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.762445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 15:50:43.762464 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.762476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 15:50:43.762488 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.762498 | orchestrator | 2025-07-12 15:50:43.762509 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-07-12 15:50:43.762520 | orchestrator | Saturday 12 July 2025 15:49:16 +0000 (0:00:00.625) 0:00:23.467 ********* 2025-07-12 15:50:43.762545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 15:50:43.762564 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:50:43.762576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 15:50:43.762587 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:50:43.762613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 15:50:43.762636 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:50:43.762653 | orchestrator | 2025-07-12 15:50:43.762673 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-07-12 15:50:43.762691 | orchestrator | Saturday 12 July 2025 15:49:17 +0000 (0:00:01.032) 0:00:24.500 ********* 2025-07-12 15:50:43.762703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 15:50:43.762742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 15:50:43.762806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 15:50:43.762844 | 
orchestrator |
2025-07-12 15:50:43.762863 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-07-12 15:50:43.762874 | orchestrator | Saturday 12 July 2025 15:49:18 +0000 (0:00:00.287) 0:00:25.705 *********
2025-07-12 15:50:43.762885 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:50:43.762895 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:50:43.762906 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:50:43.762916 | orchestrator |
2025-07-12 15:50:43.762927 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-07-12 15:50:43.762938 | orchestrator | Saturday 12 July 2025 15:49:18 +0000 (0:00:00.287) 0:00:25.992 *********
2025-07-12 15:50:43.762956 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:50:43.762967 | orchestrator |
2025-07-12 15:50:43.762978 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-07-12 15:50:43.762989 | orchestrator | Saturday 12 July 2025 15:49:19 +0000 (0:00:00.721) 0:00:26.714 *********
2025-07-12 15:50:43.762999 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:50:43.763010 | orchestrator |
2025-07-12 15:50:43.763021 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-07-12 15:50:43.763031 | orchestrator | Saturday 12 July 2025 15:49:21 +0000 (0:00:02.240) 0:00:28.954 *********
2025-07-12 15:50:43.763041 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:50:43.763052 | orchestrator |
2025-07-12 15:50:43.763063 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-07-12 15:50:43.763073 | orchestrator | Saturday 12 July 2025 15:49:23 +0000 (0:00:02.152) 0:00:31.106 *********
2025-07-12 15:50:43.763084 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:50:43.763095 | orchestrator |
2025-07-12 15:50:43.763105 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-07-12 15:50:43.763116 | orchestrator | Saturday 12 July 2025 15:49:39 +0000 (0:00:15.721) 0:00:46.827 *********
2025-07-12 15:50:43.763126 | orchestrator |
2025-07-12 15:50:43.763137 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-07-12 15:50:43.763148 | orchestrator | Saturday 12 July 2025 15:49:39 +0000 (0:00:00.064) 0:00:46.892 *********
2025-07-12 15:50:43.763160 | orchestrator |
2025-07-12 15:50:43.763180 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-07-12 15:50:43.763199 | orchestrator | Saturday 12 July 2025 15:49:39 +0000 (0:00:00.067) 0:00:46.959 *********
2025-07-12 15:50:43.763219 | orchestrator |
2025-07-12 15:50:43.763235 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-07-12 15:50:43.763246 | orchestrator | Saturday 12 July 2025 15:49:39 +0000 (0:00:00.068) 0:00:47.028 *********
2025-07-12 15:50:43.763257 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:50:43.763267 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:50:43.763278 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:50:43.763288 | orchestrator |
2025-07-12 15:50:43.763299 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:50:43.763310 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-07-12 15:50:43.763322 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-07-12 15:50:43.763332 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-07-12 15:50:43.763343 | orchestrator |
2025-07-12 15:50:43.763353 | orchestrator |
2025-07-12 15:50:43.763364 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:50:43.763374 | orchestrator | Saturday 12 July 2025 15:50:40 +0000 (0:01:00.805) 0:01:47.834 *********
2025-07-12 15:50:43.763396 | orchestrator | ===============================================================================
2025-07-12 15:50:43.763406 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.81s
2025-07-12 15:50:43.763417 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.72s
2025-07-12 15:50:43.763428 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.24s
2025-07-12 15:50:43.763438 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.15s
2025-07-12 15:50:43.763449 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.06s
2025-07-12 15:50:43.763459 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.01s
2025-07-12 15:50:43.763469 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.65s
2025-07-12 15:50:43.763480 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.53s
2025-07-12 15:50:43.763490 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.51s
2025-07-12 15:50:43.763501 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.20s
2025-07-12 15:50:43.763511 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.13s
2025-07-12 15:50:43.763522 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.03s
2025-07-12 15:50:43.763532 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.90s
2025-07-12 15:50:43.763543 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s
2025-07-12 15:50:43.763553 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s
2025-07-12 15:50:43.763569 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.63s
2025-07-12 15:50:43.763580 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s
2025-07-12 15:50:43.763590 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2025-07-12 15:50:43.763601 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s
2025-07-12 15:50:43.763612 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2025-07-12 15:50:46.817445 | orchestrator | 2025-07-12 15:50:46 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:50:46.819100 | orchestrator | 2025-07-12 15:50:46 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:50:46.819128 | orchestrator | 2025-07-12 15:50:46 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:50:49.891700 | orchestrator | 2025-07-12 15:50:49 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:50:49.891848 | orchestrator | 2025-07-12 15:50:49 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:50:49.891868 | orchestrator | 2025-07-12 15:50:49 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:50:52.934331 | orchestrator | 2025-07-12 15:50:52 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:50:52.934431 | orchestrator | 2025-07-12 15:50:52 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:50:52.934447 | orchestrator | 2025-07-12 15:50:52 | INFO  | Wait 1 second(s) until the next check
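The `osism` status lines above follow a plain poll-and-wait pattern: query each submitted task's Celery-style state (STARTED until SUCCESS), then sleep before the next check while anything is still pending. A minimal sketch of that pattern; `wait_for_tasks` and `get_task_state` are illustrative names, not the real OSISM client API:

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0, sleep=time.sleep):
    """Poll every task until all report SUCCESS; return the log lines emitted.

    `get_task_state` is a hypothetical lookup callable (task id -> state string);
    `sleep` is injectable so the loop can be tested without real delays.
    """
    lines = []
    pending = list(task_ids)
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_task_state(task_id)
            lines.append(f"Task {task_id} is in state {state}")
            if state != "SUCCESS":
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            lines.append(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return lines


# Canned example: the task reports STARTED twice, then SUCCESS.
feed = iter(["STARTED", "STARTED", "SUCCESS"])
log = wait_for_tasks(["fe5911e0"], lambda _task_id: next(feed),
                     interval=1.0, sleep=lambda _secs: None)
```

With the canned feed the loop emits two STARTED lines separated by wait notices and stops on the first SUCCESS, matching the cadence visible in the console output.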
2025-07-12 15:50:55.986271 | orchestrator | 2025-07-12 15:50:55 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:50:55.988721 | orchestrator | 2025-07-12 15:50:55 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:50:55.988767 | orchestrator | 2025-07-12 15:50:55 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:50:59.052050 | orchestrator | 2025-07-12 15:50:59 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:50:59.053619 | orchestrator | 2025-07-12 15:50:59 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:50:59.053682 | orchestrator | 2025-07-12 15:50:59 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:02.099660 | orchestrator | 2025-07-12 15:51:02 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:51:02.101897 | orchestrator | 2025-07-12 15:51:02 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:02.101977 | orchestrator | 2025-07-12 15:51:02 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:05.143363 | orchestrator | 2025-07-12 15:51:05 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:51:05.145189 | orchestrator | 2025-07-12 15:51:05 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:05.145222 | orchestrator | 2025-07-12 15:51:05 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:08.188091 | orchestrator | 2025-07-12 15:51:08 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:51:08.189601 | orchestrator | 2025-07-12 15:51:08 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:08.189635 | orchestrator | 2025-07-12 15:51:08 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:11.229278 | orchestrator | 2025-07-12 15:51:11 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:51:11.231620 | orchestrator | 2025-07-12 15:51:11 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:11.231818 | orchestrator | 2025-07-12 15:51:11 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:14.275134 | orchestrator | 2025-07-12 15:51:14 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:51:14.276915 | orchestrator | 2025-07-12 15:51:14 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:14.277037 | orchestrator | 2025-07-12 15:51:14 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:17.318949 | orchestrator | 2025-07-12 15:51:17 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:51:17.320445 | orchestrator | 2025-07-12 15:51:17 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:17.320476 | orchestrator | 2025-07-12 15:51:17 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:20.363454 | orchestrator | 2025-07-12 15:51:20 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:51:20.365332 | orchestrator | 2025-07-12 15:51:20 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:20.365381 | orchestrator | 2025-07-12 15:51:20 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:23.405247 | orchestrator | 2025-07-12 15:51:23 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:51:23.406608 | orchestrator | 2025-07-12 15:51:23 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:23.406644 | orchestrator | 2025-07-12 15:51:23 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:26.461147 | orchestrator | 2025-07-12 15:51:26 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state STARTED
2025-07-12 15:51:26.466380 | orchestrator | 2025-07-12 15:51:26 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:26.466477 | orchestrator | 2025-07-12 15:51:26 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:29.528271 | orchestrator | 2025-07-12 15:51:29 | INFO  | Task fe5911e0-e4ee-4cd3-910a-b677ef7143ad is in state SUCCESS
2025-07-12 15:51:29.529967 | orchestrator | 2025-07-12 15:51:29 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:29.531331 | orchestrator | 2025-07-12 15:51:29 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:29.532833 | orchestrator | 2025-07-12 15:51:29 | INFO  | Task 9ee5006b-a226-476a-8cf1-2b13a136ebf5 is in state STARTED
2025-07-12 15:51:29.534160 | orchestrator | 2025-07-12 15:51:29 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:29.534188 | orchestrator | 2025-07-12 15:51:29 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:32.568045 | orchestrator | 2025-07-12 15:51:32 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:32.569075 | orchestrator | 2025-07-12 15:51:32 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:32.569194 | orchestrator | 2025-07-12 15:51:32 | INFO  | Task 9ee5006b-a226-476a-8cf1-2b13a136ebf5 is in state SUCCESS
2025-07-12 15:51:32.570159 | orchestrator | 2025-07-12 15:51:32 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:32.570295 | orchestrator | 2025-07-12 15:51:32 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:35.608185 | orchestrator | 2025-07-12 15:51:35 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:35.608578 | orchestrator | 2025-07-12 15:51:35 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:35.609301 | orchestrator | 2025-07-12 15:51:35 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:35.610860 | orchestrator | 2025-07-12 15:51:35 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:51:35.611548 | orchestrator | 2025-07-12 15:51:35 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:51:35.611720 | orchestrator | 2025-07-12 15:51:35 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:38.636873 | orchestrator | 2025-07-12 15:51:38 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:38.639369 | orchestrator | 2025-07-12 15:51:38 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state STARTED
2025-07-12 15:51:38.641002 | orchestrator | 2025-07-12 15:51:38 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:38.642731 | orchestrator | 2025-07-12 15:51:38 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:51:38.644117 | orchestrator | 2025-07-12 15:51:38 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:51:38.644251 | orchestrator | 2025-07-12 15:51:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:41.674276 | orchestrator | 2025-07-12 15:51:41 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:41.675881 | orchestrator |
2025-07-12 15:51:41.675918 | orchestrator |
2025-07-12 15:51:41.675931 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-07-12 15:51:41.675943 | orchestrator |
2025-07-12 15:51:41.675954 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-07-12 15:51:41.675965 | orchestrator | Saturday 12 July 2025 15:50:32 +0000 (0:00:00.240) 0:00:00.240 *********
2025-07-12 15:51:41.675991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-07-12 15:51:41.676023 | orchestrator |
2025-07-12 15:51:41.676035 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-07-12 15:51:41.676046 | orchestrator | Saturday 12 July 2025 15:50:32 +0000 (0:00:00.222) 0:00:00.463 *********
2025-07-12 15:51:41.676057 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-07-12 15:51:41.676068 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-07-12 15:51:41.676079 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-07-12 15:51:41.676090 | orchestrator |
2025-07-12 15:51:41.676101 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-07-12 15:51:41.676112 | orchestrator | Saturday 12 July 2025 15:50:34 +0000 (0:00:01.252) 0:00:01.715 *********
2025-07-12 15:51:41.676122 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-07-12 15:51:41.676133 | orchestrator |
2025-07-12 15:51:41.676144 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-07-12 15:51:41.676155 | orchestrator | Saturday 12 July 2025 15:50:35 +0000 (0:00:01.104) 0:00:02.819 *********
2025-07-12 15:51:41.676165 | orchestrator | changed: [testbed-manager]
2025-07-12 15:51:41.676176 | orchestrator |
2025-07-12 15:51:41.676187 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-07-12 15:51:41.676197 | orchestrator | Saturday 12 July 2025 15:50:36 +0000 (0:00:00.995) 0:00:03.815 *********
2025-07-12 15:51:41.676209 | orchestrator | changed: [testbed-manager]
2025-07-12 15:51:41.676219 | orchestrator |
2025-07-12 15:51:41.676230 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-07-12 15:51:41.676241 | orchestrator | Saturday 12 July 2025 15:50:37 +0000 (0:00:00.868) 0:00:04.684 *********
2025-07-12 15:51:41.676251 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-07-12 15:51:41.676262 | orchestrator | ok: [testbed-manager]
2025-07-12 15:51:41.676273 | orchestrator |
2025-07-12 15:51:41.676283 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-07-12 15:51:41.676816 | orchestrator | Saturday 12 July 2025 15:51:16 +0000 (0:00:39.974) 0:00:44.658 *********
2025-07-12 15:51:41.676832 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-07-12 15:51:41.676845 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-07-12 15:51:41.676996 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-07-12 15:51:41.677062 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-07-12 15:51:41.677074 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-07-12 15:51:41.677085 | orchestrator |
2025-07-12 15:51:41.677096 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-07-12 15:51:41.677107 | orchestrator | Saturday 12 July 2025 15:51:20 +0000 (0:00:03.988) 0:00:48.647 *********
2025-07-12 15:51:41.677118 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-07-12 15:51:41.677129 | orchestrator |
2025-07-12 15:51:41.677140 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-07-12 15:51:41.677150 | orchestrator | Saturday 12 July 2025 15:51:21 +0000 (0:00:00.436) 0:00:49.083 *********
2025-07-12 15:51:41.677161 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:51:41.677172 | orchestrator |
2025-07-12 15:51:41.677182 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-07-12 15:51:41.677193 | orchestrator | Saturday 12 July 2025 15:51:21 +0000 (0:00:00.135) 0:00:49.219 *********
2025-07-12 15:51:41.677203 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:51:41.677214 | orchestrator |
2025-07-12 15:51:41.677225 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-07-12 15:51:41.677235 | orchestrator | Saturday 12 July 2025 15:51:21 +0000 (0:00:00.284) 0:00:49.503 *********
2025-07-12 15:51:41.677246 | orchestrator | changed: [testbed-manager]
2025-07-12 15:51:41.677256 | orchestrator |
2025-07-12 15:51:41.677267 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-07-12 15:51:41.677291 | orchestrator | Saturday 12 July 2025 15:51:23 +0000 (0:00:01.648) 0:00:51.152 *********
2025-07-12 15:51:41.677302 | orchestrator | changed: [testbed-manager]
2025-07-12 15:51:41.677312 | orchestrator |
2025-07-12 15:51:41.677323 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-07-12 15:51:41.677333 | orchestrator | Saturday 12 July 2025 15:51:24 +0000 (0:00:00.691) 0:00:51.843 *********
2025-07-12 15:51:41.677344 | orchestrator | changed: [testbed-manager]
2025-07-12 15:51:41.677355 | orchestrator |
2025-07-12 15:51:41.677365 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-07-12 15:51:41.677376 | orchestrator | Saturday 12 July 2025 15:51:24 +0000 (0:00:00.648) 0:00:52.492 *********
2025-07-12 15:51:41.677386 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-07-12 15:51:41.677397 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-07-12 15:51:41.677408 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-07-12 15:51:41.677419 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-07-12 15:51:41.677429 | orchestrator |
2025-07-12 15:51:41.677440 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:51:41.677451 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 15:51:41.677463 | orchestrator |
2025-07-12 15:51:41.677473 | orchestrator |
2025-07-12 15:51:41.677525 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:51:41.677538 | orchestrator | Saturday 12 July 2025 15:51:26 +0000 (0:00:01.400) 0:00:53.892 *********
2025-07-12 15:51:41.677549 | orchestrator | ===============================================================================
2025-07-12 15:51:41.677560 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.97s
2025-07-12 15:51:41.677571 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.99s
2025-07-12 15:51:41.677589 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.65s
2025-07-12 15:51:41.677600 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.40s
2025-07-12 15:51:41.677611 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s
2025-07-12 15:51:41.677621 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.10s
2025-07-12 15:51:41.677632 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.00s
2025-07-12 15:51:41.677643 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.87s
2025-07-12 15:51:41.677653 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s
2025-07-12 15:51:41.677664 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s
2025-07-12 15:51:41.677675 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s
2025-07-12 15:51:41.677685 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s
2025-07-12 15:51:41.677698 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2025-07-12 15:51:41.677710 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2025-07-12 15:51:41.677722 | orchestrator |
2025-07-12 15:51:41.677734 | orchestrator |
2025-07-12 15:51:41.677746 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 15:51:41.677781 | orchestrator |
2025-07-12 15:51:41.677794 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 15:51:41.677806 | orchestrator | Saturday 12 July 2025 15:51:30 +0000 (0:00:00.189) 0:00:00.189 *********
2025-07-12 15:51:41.677818 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:51:41.677830 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:51:41.677842 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:51:41.677854 | orchestrator |
2025-07-12 15:51:41.677865 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 15:51:41.677884 | orchestrator | Saturday 12 July 2025 15:51:30 +0000 (0:00:00.316) 0:00:00.505 *********
2025-07-12 15:51:41.677897 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-07-12 15:51:41.677909 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-07-12 15:51:41.677921 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-07-12 15:51:41.677934 | orchestrator |
2025-07-12 15:51:41.677946 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-07-12 15:51:41.677958 | orchestrator |
2025-07-12 15:51:41.677970 | orchestrator | TASK [Waiting for Keystone public port to be UP]
******************************* 2025-07-12 15:51:41.677982 | orchestrator | Saturday 12 July 2025 15:51:31 +0000 (0:00:00.625) 0:00:01.130 ********* 2025-07-12 15:51:41.677995 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:51:41.678007 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:51:41.678070 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:51:41.678085 | orchestrator | 2025-07-12 15:51:41.678095 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:51:41.678107 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:51:41.678118 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:51:41.678129 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:51:41.678140 | orchestrator | 2025-07-12 15:51:41.678150 | orchestrator | 2025-07-12 15:51:41.678161 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:51:41.678172 | orchestrator | Saturday 12 July 2025 15:51:32 +0000 (0:00:00.687) 0:00:01.818 ********* 2025-07-12 15:51:41.678182 | orchestrator | =============================================================================== 2025-07-12 15:51:41.678193 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.69s 2025-07-12 15:51:41.678204 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-07-12 15:51:41.678214 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-07-12 15:51:41.678225 | orchestrator | 2025-07-12 15:51:41.678235 | orchestrator | 2025-07-12 15:51:41.678246 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:51:41.678257 | orchestrator | 2025-07-12 
15:51:41.678268 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:51:41.678278 | orchestrator | Saturday 12 July 2025 15:48:52 +0000 (0:00:00.253) 0:00:00.254 ********* 2025-07-12 15:51:41.678289 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:51:41.678300 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:51:41.678311 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:51:41.678321 | orchestrator | 2025-07-12 15:51:41.678332 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:51:41.678343 | orchestrator | Saturday 12 July 2025 15:48:53 +0000 (0:00:00.301) 0:00:00.555 ********* 2025-07-12 15:51:41.678353 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-12 15:51:41.678365 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-12 15:51:41.678376 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-12 15:51:41.678387 | orchestrator | 2025-07-12 15:51:41.678397 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-07-12 15:51:41.678408 | orchestrator | 2025-07-12 15:51:41.678456 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 15:51:41.678469 | orchestrator | Saturday 12 July 2025 15:48:53 +0000 (0:00:00.424) 0:00:00.980 ********* 2025-07-12 15:51:41.678481 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:51:41.678491 | orchestrator | 2025-07-12 15:51:41.678502 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-07-12 15:51:41.678526 | orchestrator | Saturday 12 July 2025 15:48:54 +0000 (0:00:00.511) 0:00:01.491 ********* 2025-07-12 15:51:41.678543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.678561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.678574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.678619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.678638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.678658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.678669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.678681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.678692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.678704 | orchestrator | 2025-07-12 15:51:41.678715 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-07-12 15:51:41.678826 | orchestrator | Saturday 12 July 2025 15:48:56 +0000 (0:00:01.883) 0:00:03.375 ********* 2025-07-12 15:51:41.678838 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-07-12 15:51:41.678849 | orchestrator | 2025-07-12 15:51:41.678860 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-07-12 15:51:41.678871 | orchestrator | Saturday 12 July 2025 15:48:56 +0000 (0:00:00.824) 0:00:04.199 ********* 2025-07-12 15:51:41.678881 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:51:41.678892 | orchestrator | ok: [testbed-node-1] 
2025-07-12 15:51:41.678903 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:51:41.678913 | orchestrator | 2025-07-12 15:51:41.678924 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-07-12 15:51:41.678942 | orchestrator | Saturday 12 July 2025 15:48:57 +0000 (0:00:00.482) 0:00:04.682 ********* 2025-07-12 15:51:41.678953 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 15:51:41.678964 | orchestrator | 2025-07-12 15:51:41.678974 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 15:51:41.679017 | orchestrator | Saturday 12 July 2025 15:48:58 +0000 (0:00:00.690) 0:00:05.372 ********* 2025-07-12 15:51:41.679030 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:51:41.679041 | orchestrator | 2025-07-12 15:51:41.679051 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-07-12 15:51:41.679062 | orchestrator | Saturday 12 July 2025 15:48:58 +0000 (0:00:00.529) 0:00:05.901 ********* 2025-07-12 15:51:41.679080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.679093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.679106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.679126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.679144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.679156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.679198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.679211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.679222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.679234 | orchestrator | 2025-07-12 15:51:41.679244 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-07-12 15:51:41.679263 | orchestrator | Saturday 12 July 2025 15:49:02 +0000 (0:00:03.488) 0:00:09.390 ********* 2025-07-12 15:51:41.679284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 15:51:41.679306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.679318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:51:41.679330 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:51:41.679342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 15:51:41.679354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.679374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:51:41.679385 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:51:41.679445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 15:51:41.679460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.679473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  
2025-07-12 15:51:41.679485 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:51:41.679497 | orchestrator | 2025-07-12 15:51:41.679510 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-07-12 15:51:41.679599 | orchestrator | Saturday 12 July 2025 15:49:02 +0000 (0:00:00.543) 0:00:09.934 ********* 2025-07-12 15:51:41.679616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 15:51:41.679639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.679660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:51:41 | INFO  | Task cf178d02-672a-41dd-b1f8-5b0d0c3b0054 is in state SUCCESS 2025-07-12 15:51:41.679694 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:51:41.679708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2025-07-12 15:51:41.679722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.679735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:51:41.679753 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:51:41.679785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 15:51:41.679813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.679825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 15:51:41.679837 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:51:41.679848 | orchestrator | 2025-07-12 15:51:41.679859 | orchestrator | TASK [keystone : Copying 
over config.json files for services] ****************** 2025-07-12 15:51:41.679870 | orchestrator | Saturday 12 July 2025 15:49:03 +0000 (0:00:00.732) 0:00:10.667 ********* 2025-07-12 15:51:41.679882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.679900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.679920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.679937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.679949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.679961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.679978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 2025-07-12 15:51:41.679990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.680001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.680012 | orchestrator | 2025-07-12 15:51:41.680023 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-07-12 15:51:41.680040 | orchestrator | Saturday 12 July 2025 15:49:06 +0000 (0:00:03.621) 0:00:14.288 ********* 2025-07-12 15:51:41.680057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.680069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.680081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.680099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.680117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.680134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.680146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.680157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.680175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.680186 | orchestrator | 2025-07-12 15:51:41.680197 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-07-12 15:51:41.680208 | orchestrator | Saturday 12 July 2025 15:49:11 +0000 (0:00:04.910) 0:00:19.199 ********* 2025-07-12 15:51:41.680219 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:51:41.680230 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:51:41.680240 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:51:41.680251 | orchestrator | 2025-07-12 15:51:41.680262 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-07-12 15:51:41.680272 | orchestrator | Saturday 12 July 2025 15:49:13 +0000 (0:00:01.451) 0:00:20.650 ********* 2025-07-12 15:51:41.680283 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:51:41.680293 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:51:41.680304 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:51:41.680315 | orchestrator | 2025-07-12 15:51:41.680325 | orchestrator | TASK [keystone : Get file list in 
custom domains folder] *********************** 2025-07-12 15:51:41.680336 | orchestrator | Saturday 12 July 2025 15:49:13 +0000 (0:00:00.594) 0:00:21.245 ********* 2025-07-12 15:51:41.680347 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:51:41.680357 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:51:41.680368 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:51:41.680379 | orchestrator | 2025-07-12 15:51:41.680390 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-07-12 15:51:41.680400 | orchestrator | Saturday 12 July 2025 15:49:14 +0000 (0:00:00.565) 0:00:21.811 ********* 2025-07-12 15:51:41.680411 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:51:41.680422 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:51:41.680432 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:51:41.680443 | orchestrator | 2025-07-12 15:51:41.680454 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-07-12 15:51:41.680470 | orchestrator | Saturday 12 July 2025 15:49:14 +0000 (0:00:00.324) 0:00:22.135 ********* 2025-07-12 15:51:41.680486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.680509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.680522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.680534 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.680552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.680569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 15:51:41.680587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.680598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.680610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.680621 | orchestrator | 2025-07-12 15:51:41.680632 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 15:51:41.680643 | orchestrator | Saturday 12 July 2025 15:49:17 +0000 (0:00:02.210) 0:00:24.346 ********* 2025-07-12 15:51:41.680653 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:51:41.680664 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:51:41.680675 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:51:41.680686 | orchestrator | 2025-07-12 15:51:41.680696 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-07-12 15:51:41.680707 | orchestrator | Saturday 12 July 2025 15:49:17 +0000 (0:00:00.309) 0:00:24.655 ********* 2025-07-12 15:51:41.680717 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 15:51:41.680728 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 15:51:41.680739 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 15:51:41.680750 | orchestrator | 2025-07-12 15:51:41.680787 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-07-12 15:51:41.680799 | orchestrator | Saturday 12 July 2025 15:49:19 +0000 (0:00:02.203) 0:00:26.859 ********* 2025-07-12 15:51:41.680810 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 15:51:41.680821 | orchestrator | 2025-07-12 
15:51:41.680831 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-07-12 15:51:41.680842 | orchestrator | Saturday 12 July 2025 15:49:20 +0000 (0:00:00.888) 0:00:27.748 ********* 2025-07-12 15:51:41.680853 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:51:41.680864 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:51:41.680874 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:51:41.680885 | orchestrator | 2025-07-12 15:51:41.680902 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-07-12 15:51:41.680919 | orchestrator | Saturday 12 July 2025 15:49:20 +0000 (0:00:00.518) 0:00:28.267 ********* 2025-07-12 15:51:41.680930 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 15:51:41.680941 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 15:51:41.680952 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 15:51:41.680963 | orchestrator | 2025-07-12 15:51:41.680973 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-07-12 15:51:41.680989 | orchestrator | Saturday 12 July 2025 15:49:21 +0000 (0:00:00.960) 0:00:29.227 ********* 2025-07-12 15:51:41.681000 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:51:41.681011 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:51:41.681021 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:51:41.681032 | orchestrator | 2025-07-12 15:51:41.681043 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-07-12 15:51:41.681053 | orchestrator | Saturday 12 July 2025 15:49:22 +0000 (0:00:00.318) 0:00:29.545 ********* 2025-07-12 15:51:41.681064 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-12 15:51:41.681075 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-12 
15:51:41.681085 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-12 15:51:41.681096 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-12 15:51:41.681107 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-12 15:51:41.681117 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-12 15:51:41.681128 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-12 15:51:41.681139 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-12 15:51:41.681150 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-12 15:51:41.681160 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-12 15:51:41.681171 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-12 15:51:41.681181 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-12 15:51:41.681192 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-12 15:51:41.681203 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-12 15:51:41.681213 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-12 15:51:41.681224 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 15:51:41.681235 | orchestrator | changed: [testbed-node-1] 
=> (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 15:51:41.681246 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 15:51:41.681256 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 15:51:41.681267 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 15:51:41.681277 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 15:51:41.681288 | orchestrator | 2025-07-12 15:51:41.681299 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-07-12 15:51:41.681309 | orchestrator | Saturday 12 July 2025 15:49:31 +0000 (0:00:08.759) 0:00:38.305 ********* 2025-07-12 15:51:41.681326 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 15:51:41.681336 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 15:51:41.681347 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 15:51:41.681358 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 15:51:41.681368 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 15:51:41.681379 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 15:51:41.681389 | orchestrator | 2025-07-12 15:51:41.681400 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-07-12 15:51:41.681504 | orchestrator | Saturday 12 July 2025 15:49:33 +0000 (0:00:02.477) 0:00:40.782 ********* 2025-07-12 15:51:41.681538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.681553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.681566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 15:51:41.681586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.681598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.681622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 15:51:41.681634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.681646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.681658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 15:51:41.681669 | orchestrator | 2025-07-12 15:51:41.681681 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 15:51:41.681698 | orchestrator | Saturday 12 July 2025 15:49:35 +0000 (0:00:02.330) 0:00:43.113 ********* 2025-07-12 15:51:41.681709 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:51:41.681720 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:51:41.681731 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:51:41.681741 | orchestrator | 2025-07-12 15:51:41.681752 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-07-12 15:51:41.681782 | orchestrator | Saturday 12 July 2025 15:49:36 +0000 (0:00:00.323) 0:00:43.437 ********* 2025-07-12 15:51:41.681794 | orchestrator | changed: [testbed-node-0] 2025-07-12 
15:51:41.681804 | orchestrator |
2025-07-12 15:51:41.681815 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-07-12 15:51:41.681826 | orchestrator | Saturday 12 July 2025 15:49:38 +0000 (0:00:02.333) 0:00:45.771 *********
2025-07-12 15:51:41.681837 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:51:41.681847 | orchestrator |
2025-07-12 15:51:41.681858 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-07-12 15:51:41.681869 | orchestrator | Saturday 12 July 2025 15:49:41 +0000 (0:00:02.837) 0:00:48.608 *********
2025-07-12 15:51:41.681879 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:51:41.681890 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:51:41.681900 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:51:41.681911 | orchestrator |
2025-07-12 15:51:41.681921 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-07-12 15:51:41.681932 | orchestrator | Saturday 12 July 2025 15:49:42 +0000 (0:00:01.065) 0:00:49.674 *********
2025-07-12 15:51:41.681942 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:51:41.681953 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:51:41.681964 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:51:41.681974 | orchestrator |
2025-07-12 15:51:41.681985 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-07-12 15:51:41.681995 | orchestrator | Saturday 12 July 2025 15:49:42 +0000 (0:00:00.309) 0:00:49.984 *********
2025-07-12 15:51:41.682006 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:51:41.682065 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:51:41.682080 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:51:41.682091 | orchestrator |
2025-07-12 15:51:41.682101 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-07-12 15:51:41.682112 | orchestrator | Saturday 12 July 2025 15:49:43 +0000 (0:00:00.404) 0:00:50.388 *********
2025-07-12 15:51:41.682123 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:51:41.682133 | orchestrator |
2025-07-12 15:51:41.682144 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-07-12 15:51:41.682163 | orchestrator | Saturday 12 July 2025 15:49:57 +0000 (0:00:14.487) 0:01:04.875 *********
2025-07-12 15:51:41.682176 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:51:41.682186 | orchestrator |
2025-07-12 15:51:41.682197 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-07-12 15:51:41.682208 | orchestrator | Saturday 12 July 2025 15:50:07 +0000 (0:00:10.211) 0:01:15.087 *********
2025-07-12 15:51:41.682218 | orchestrator |
2025-07-12 15:51:41.682229 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-07-12 15:51:41.682245 | orchestrator | Saturday 12 July 2025 15:50:08 +0000 (0:00:00.290) 0:01:15.377 *********
2025-07-12 15:51:41.682256 | orchestrator |
2025-07-12 15:51:41.682267 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-07-12 15:51:41.682278 | orchestrator | Saturday 12 July 2025 15:50:08 +0000 (0:00:00.069) 0:01:15.446 *********
2025-07-12 15:51:41.682288 | orchestrator |
2025-07-12 15:51:41.682299 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-07-12 15:51:41.682310 | orchestrator | Saturday 12 July 2025 15:50:08 +0000 (0:00:00.068) 0:01:15.514 *********
2025-07-12 15:51:41.682320 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:51:41.682331 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:51:41.682341 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:51:41.682359 | orchestrator |
2025-07-12 15:51:41.682370 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-07-12 15:51:41.682380 | orchestrator | Saturday 12 July 2025 15:50:31 +0000 (0:00:23.519) 0:01:39.034 *********
2025-07-12 15:51:41.682391 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:51:41.682401 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:51:41.682412 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:51:41.682422 | orchestrator |
2025-07-12 15:51:41.682433 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-07-12 15:51:41.682444 | orchestrator | Saturday 12 July 2025 15:50:41 +0000 (0:00:09.558) 0:01:48.593 *********
2025-07-12 15:51:41.682454 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:51:41.682465 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:51:41.682475 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:51:41.682486 | orchestrator |
2025-07-12 15:51:41.682496 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-12 15:51:41.682507 | orchestrator | Saturday 12 July 2025 15:50:53 +0000 (0:00:11.792) 0:02:00.385 *********
2025-07-12 15:51:41.682517 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:51:41.682528 | orchestrator |
2025-07-12 15:51:41.682538 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-07-12 15:51:41.682549 | orchestrator | Saturday 12 July 2025 15:50:53 +0000 (0:00:00.716) 0:02:01.102 *********
2025-07-12 15:51:41.682560 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:51:41.682570 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:51:41.682581 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:51:41.682591 | orchestrator |
2025-07-12 15:51:41.682602 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-07-12 15:51:41.682612 | orchestrator | Saturday 12 July 2025 15:50:54 +0000 (0:00:00.912) 0:02:02.015 *********
2025-07-12 15:51:41.682623 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:51:41.682634 | orchestrator |
2025-07-12 15:51:41.682644 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-07-12 15:51:41.682655 | orchestrator | Saturday 12 July 2025 15:50:56 +0000 (0:00:01.796) 0:02:03.811 *********
2025-07-12 15:51:41.682665 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-07-12 15:51:41.682677 | orchestrator |
2025-07-12 15:51:41.682688 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-07-12 15:51:41.682698 | orchestrator | Saturday 12 July 2025 15:51:05 +0000 (0:00:09.426) 0:02:13.238 *********
2025-07-12 15:51:41.682709 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-07-12 15:51:41.682719 | orchestrator |
2025-07-12 15:51:41.682730 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-07-12 15:51:41.682740 | orchestrator | Saturday 12 July 2025 15:51:26 +0000 (0:00:20.742) 0:02:33.980 *********
2025-07-12 15:51:41.682751 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-07-12 15:51:41.682810 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-07-12 15:51:41.682825 | orchestrator |
2025-07-12 15:51:41.682837 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-07-12 15:51:41.682848 | orchestrator | Saturday 12 July 2025 15:51:33 +0000 (0:00:06.530) 0:02:40.511 *********
2025-07-12 15:51:41.682860 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:51:41.682872 | orchestrator |
2025-07-12 15:51:41.682883 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-07-12 15:51:41.682895 | orchestrator | Saturday 12 July 2025 15:51:33 +0000 (0:00:00.452) 0:02:40.963 *********
2025-07-12 15:51:41.682906 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:51:41.682918 | orchestrator |
2025-07-12 15:51:41.682929 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-07-12 15:51:41.682941 | orchestrator | Saturday 12 July 2025 15:51:33 +0000 (0:00:00.179) 0:02:41.142 *********
2025-07-12 15:51:41.682959 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:51:41.682971 | orchestrator |
2025-07-12 15:51:41.682982 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-07-12 15:51:41.682994 | orchestrator | Saturday 12 July 2025 15:51:34 +0000 (0:00:00.238) 0:02:41.381 *********
2025-07-12 15:51:41.683005 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:51:41.683017 | orchestrator |
2025-07-12 15:51:41.683028 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-07-12 15:51:41.683040 | orchestrator | Saturday 12 July 2025 15:51:35 +0000 (0:00:01.081) 0:02:42.462 *********
2025-07-12 15:51:41.683051 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:51:41.683063 | orchestrator |
2025-07-12 15:51:41.683075 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-12 15:51:41.683095 | orchestrator | Saturday 12 July 2025 15:51:38 +0000 (0:00:03.537) 0:02:46.000 *********
2025-07-12 15:51:41.683107 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:51:41.683118 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:51:41.683129 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:51:41.683139 | orchestrator |
2025-07-12 15:51:41.683150 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:51:41.683165 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-07-12 15:51:41.683177 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-07-12 15:51:41.683188 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-07-12 15:51:41.683199 | orchestrator |
2025-07-12 15:51:41.683209 | orchestrator |
2025-07-12 15:51:41.683220 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:51:41.683231 | orchestrator | Saturday 12 July 2025 15:51:39 +0000 (0:00:00.755) 0:02:46.755 *********
2025-07-12 15:51:41.683242 | orchestrator | ===============================================================================
2025-07-12 15:51:41.683252 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 23.52s
2025-07-12 15:51:41.683263 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.74s
2025-07-12 15:51:41.683274 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.49s
2025-07-12 15:51:41.683284 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.79s
2025-07-12 15:51:41.683295 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.21s
2025-07-12 15:51:41.683305 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.56s
2025-07-12 15:51:41.683316 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.43s
2025-07-12 15:51:41.683327 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.76s
2025-07-12 15:51:41.683337 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.53s
2025-07-12 15:51:41.683348 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.91s
2025-07-12 15:51:41.683358 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.62s
2025-07-12 15:51:41.683368 | orchestrator | keystone : Creating default user role ----------------------------------- 3.54s
2025-07-12 15:51:41.683377 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.49s
2025-07-12 15:51:41.683386 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.84s
2025-07-12 15:51:41.683396 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.48s
2025-07-12 15:51:41.683405 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.33s
2025-07-12 15:51:41.683415 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.33s
2025-07-12 15:51:41.683430 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.21s
2025-07-12 15:51:41.683440 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.20s
2025-07-12 15:51:41.683449 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.88s
2025-07-12 15:51:41.683459 | orchestrator | 2025-07-12 15:51:41 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:51:41.683469 | orchestrator | 2025-07-12 15:51:41 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:41.683478 | orchestrator | 2025-07-12 15:51:41 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:51:41.683488 | orchestrator | 2025-07-12 15:51:41 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:51:41.683497 | orchestrator | 2025-07-12 15:51:41 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:44.715858 | orchestrator | 
2025-07-12 15:51:44 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:44.716485 | orchestrator | 2025-07-12 15:51:44 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:51:44.717369 | orchestrator | 2025-07-12 15:51:44 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:44.718278 | orchestrator | 2025-07-12 15:51:44 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:51:44.718554 | orchestrator | 2025-07-12 15:51:44 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:51:44.718698 | orchestrator | 2025-07-12 15:51:44 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:47.770827 | orchestrator | 2025-07-12 15:51:47 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:47.770970 | orchestrator | 2025-07-12 15:51:47 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:51:47.772027 | orchestrator | 2025-07-12 15:51:47 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:47.772795 | orchestrator | 2025-07-12 15:51:47 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:51:47.773857 | orchestrator | 2025-07-12 15:51:47 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:51:47.773910 | orchestrator | 2025-07-12 15:51:47 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:50.815098 | orchestrator | 2025-07-12 15:51:50 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:50.816584 | orchestrator | 2025-07-12 15:51:50 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:51:50.817008 | orchestrator | 2025-07-12 15:51:50 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:50.818336 | orchestrator | 2025-07-12 15:51:50 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:51:50.819579 | orchestrator | 2025-07-12 15:51:50 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:51:50.819618 | orchestrator | 2025-07-12 15:51:50 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:53.885229 | orchestrator | 2025-07-12 15:51:53 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:53.886368 | orchestrator | 2025-07-12 15:51:53 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:51:53.888932 | orchestrator | 2025-07-12 15:51:53 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:53.891810 | orchestrator | 2025-07-12 15:51:53 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:51:53.893620 | orchestrator | 2025-07-12 15:51:53 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:51:53.893657 | orchestrator | 2025-07-12 15:51:53 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:56.940468 | orchestrator | 2025-07-12 15:51:56 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:56.940583 | orchestrator | 2025-07-12 15:51:56 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:51:56.940924 | orchestrator | 2025-07-12 15:51:56 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:56.943224 | orchestrator | 2025-07-12 15:51:56 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:51:56.943419 | orchestrator | 2025-07-12 15:51:56 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:51:56.943443 | orchestrator | 2025-07-12 15:51:56 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:51:59.986159 | orchestrator | 2025-07-12 15:51:59 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:51:59.987200 | orchestrator | 2025-07-12 15:51:59 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:51:59.987810 | orchestrator | 2025-07-12 15:51:59 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:51:59.988573 | orchestrator | 2025-07-12 15:51:59 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:51:59.990183 | orchestrator | 2025-07-12 15:51:59 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:51:59.990212 | orchestrator | 2025-07-12 15:51:59 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:52:03.030306 | orchestrator | 2025-07-12 15:52:03 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:52:03.032487 | orchestrator | 2025-07-12 15:52:03 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:52:03.032566 | orchestrator | 2025-07-12 15:52:03 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:52:03.032582 | orchestrator | 2025-07-12 15:52:03 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:52:03.032595 | orchestrator | 2025-07-12 15:52:03 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:52:03.032608 | orchestrator | 2025-07-12 15:52:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:52:06.055162 | orchestrator | 2025-07-12 15:52:06 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:52:06.055256 | orchestrator | 2025-07-12 15:52:06 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:52:06.056103 | orchestrator | 2025-07-12 15:52:06 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:52:06.056764 | orchestrator | 2025-07-12 15:52:06 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:52:06.058567 | orchestrator | 2025-07-12 15:52:06 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:52:06.058597 | orchestrator | 2025-07-12 15:52:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:52:09.088383 | orchestrator | 2025-07-12 15:52:09 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:52:09.089641 | orchestrator | 2025-07-12 15:52:09 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:52:09.090486 | orchestrator | 2025-07-12 15:52:09 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:52:09.092668 | orchestrator | 2025-07-12 15:52:09 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:52:09.094503 | orchestrator | 2025-07-12 15:52:09 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:52:09.094529 | orchestrator | 2025-07-12 15:52:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:52:12.139907 | orchestrator | 2025-07-12 15:52:12 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:52:12.140632 | orchestrator | 2025-07-12 15:52:12 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:52:12.142714 | orchestrator | 2025-07-12 15:52:12 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:52:12.143358 | orchestrator | 2025-07-12 15:52:12 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:52:12.144113 | orchestrator | 2025-07-12 15:52:12 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state STARTED
2025-07-12 15:52:12.145380 | orchestrator | 2025-07-12 15:52:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:52:15.186413 | orchestrator | 2025-07-12 15:52:15 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:52:15.189803 | orchestrator | 2025-07-12 15:52:15 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:52:15.190899 | orchestrator | 2025-07-12 15:52:15 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:52:15.191677 | orchestrator | 2025-07-12 15:52:15 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:52:15.192816 | orchestrator | 2025-07-12 15:52:15 | INFO  | Task 23db63a9-8adf-419f-adb5-d181291b702c is in state SUCCESS
2025-07-12 15:52:15.192827 | orchestrator | 2025-07-12 15:52:15 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:52:18.232242 | orchestrator | 2025-07-12 15:52:18 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:52:18.232328 | orchestrator | 2025-07-12 15:52:18 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:52:18.232343 | orchestrator | 2025-07-12 15:52:18 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:52:18.232355 | orchestrator | 2025-07-12 15:52:18 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:52:18.234087 | orchestrator | 2025-07-12 15:52:18 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:52:18.234116 | orchestrator | 2025-07-12 15:52:18 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:52:21.271394 | orchestrator | 2025-07-12 15:52:21 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:52:21.271736 | orchestrator | 2025-07-12 15:52:21 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:52:21.272494 | orchestrator | 2025-07-12 15:52:21 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:52:21.279694 | orchestrator | 2025-07-12 15:52:21 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:52:21.279729 | orchestrator | 2025-07-12 15:52:21 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:52:21.279770 | orchestrator | 2025-07-12 15:52:21 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:52:24.297312 | orchestrator | 2025-07-12 15:52:24 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:52:24.297481 | orchestrator | 2025-07-12 15:52:24 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:52:24.297936 | orchestrator | 2025-07-12 15:52:24 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:52:24.298716 | orchestrator | 2025-07-12 15:52:24 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:52:24.299171 | orchestrator | 2025-07-12 15:52:24 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:52:24.299194 | orchestrator | 2025-07-12 15:52:24 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:52:27.324547 | orchestrator | 2025-07-12 15:52:27 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:52:27.325149 | orchestrator | 2025-07-12 15:52:27 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:52:27.325759 | orchestrator | 2025-07-12 15:52:27 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:52:27.326871 | orchestrator | 2025-07-12 15:52:27 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:52:27.327529 | orchestrator | 2025-07-12 15:52:27 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:52:27.327977 | orchestrator | 2025-07-12 15:52:27 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:52:30.353178 | orchestrator | 2025-07-12 15:52:30 | INFO  | Task 
f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED 2025-07-12 15:52:30.353898 | orchestrator | 2025-07-12 15:52:30 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:52:30.354583 | orchestrator | 2025-07-12 15:52:30 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:52:30.355316 | orchestrator | 2025-07-12 15:52:30 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:52:30.358720 | orchestrator | 2025-07-12 15:52:30 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:52:30.359402 | orchestrator | 2025-07-12 15:52:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:52:33.383471 | orchestrator | 2025-07-12 15:52:33 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED 2025-07-12 15:52:33.383674 | orchestrator | 2025-07-12 15:52:33 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:52:33.384169 | orchestrator | 2025-07-12 15:52:33 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:52:33.384717 | orchestrator | 2025-07-12 15:52:33 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:52:33.385363 | orchestrator | 2025-07-12 15:52:33 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:52:33.385387 | orchestrator | 2025-07-12 15:52:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:52:36.416449 | orchestrator | 2025-07-12 15:52:36 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED 2025-07-12 15:52:36.416641 | orchestrator | 2025-07-12 15:52:36 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:52:36.417224 | orchestrator | 2025-07-12 15:52:36 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:52:36.417737 | orchestrator | 2025-07-12 15:52:36 | INFO  | Task 
42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:52:36.419011 | orchestrator | 2025-07-12 15:52:36 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:52:36.419038 | orchestrator | 2025-07-12 15:52:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:52:39.439569 | orchestrator | 2025-07-12 15:52:39 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED 2025-07-12 15:52:39.440308 | orchestrator | 2025-07-12 15:52:39 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:52:39.440702 | orchestrator | 2025-07-12 15:52:39 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:52:39.441477 | orchestrator | 2025-07-12 15:52:39 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:52:39.441892 | orchestrator | 2025-07-12 15:52:39 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:52:39.441915 | orchestrator | 2025-07-12 15:52:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:52:42.474848 | orchestrator | 2025-07-12 15:52:42 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED 2025-07-12 15:52:42.474937 | orchestrator | 2025-07-12 15:52:42 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:52:42.476635 | orchestrator | 2025-07-12 15:52:42 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:52:42.479064 | orchestrator | 2025-07-12 15:52:42 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:52:42.480822 | orchestrator | 2025-07-12 15:52:42 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:52:42.481083 | orchestrator | 2025-07-12 15:52:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:52:45.505198 | orchestrator | 2025-07-12 15:52:45 | INFO  | Task 
f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED 2025-07-12 15:52:45.505416 | orchestrator | 2025-07-12 15:52:45 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:52:45.505973 | orchestrator | 2025-07-12 15:52:45 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:52:45.506660 | orchestrator | 2025-07-12 15:52:45 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:52:45.507201 | orchestrator | 2025-07-12 15:52:45 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:52:45.507877 | orchestrator | 2025-07-12 15:52:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:52:48.533205 | orchestrator | 2025-07-12 15:52:48 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED 2025-07-12 15:52:48.536546 | orchestrator | 2025-07-12 15:52:48 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:52:48.536573 | orchestrator | 2025-07-12 15:52:48 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:52:48.536585 | orchestrator | 2025-07-12 15:52:48 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:52:48.536595 | orchestrator | 2025-07-12 15:52:48 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:52:48.536605 | orchestrator | 2025-07-12 15:52:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:52:51.565203 | orchestrator | 2025-07-12 15:52:51 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED 2025-07-12 15:52:51.566714 | orchestrator | 2025-07-12 15:52:51 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:52:51.570183 | orchestrator | 2025-07-12 15:52:51 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:52:51.570785 | orchestrator | 2025-07-12 15:52:51 | INFO  | Task 
42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:52:51.571166 | orchestrator | 2025-07-12 15:52:51 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:52:51.571294 | orchestrator | 2025-07-12 15:52:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:52:54.595139 | orchestrator | 2025-07-12 15:52:54 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED 2025-07-12 15:52:54.595325 | orchestrator | 2025-07-12 15:52:54 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:52:54.596120 | orchestrator | 2025-07-12 15:52:54 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:52:54.596428 | orchestrator | 2025-07-12 15:52:54 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:52:54.597107 | orchestrator | 2025-07-12 15:52:54 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:52:54.597129 | orchestrator | 2025-07-12 15:52:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:52:57.619735 | orchestrator | 2025-07-12 15:52:57 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED 2025-07-12 15:52:57.620538 | orchestrator | 2025-07-12 15:52:57 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:52:57.622343 | orchestrator | 2025-07-12 15:52:57 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:52:57.622946 | orchestrator | 2025-07-12 15:52:57 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:52:57.623374 | orchestrator | 2025-07-12 15:52:57 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:52:57.623402 | orchestrator | 2025-07-12 15:52:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:53:00.647626 | orchestrator | 2025-07-12 15:53:00 | INFO  | Task 
f25a2832-4daa-4e13-8873-06b2417f9db4 is in state STARTED
2025-07-12 15:53:00.647717 | orchestrator | 2025-07-12 15:53:00 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:53:00.648340 | orchestrator | 2025-07-12 15:53:00 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:53:00.648787 | orchestrator | 2025-07-12 15:53:00 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:53:00.649560 | orchestrator | 2025-07-12 15:53:00 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:53:00.649638 | orchestrator | 2025-07-12 15:53:00 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:53:03.703702 | orchestrator |
2025-07-12 15:53:03.703861 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 15:53:03.703874 | orchestrator |
2025-07-12 15:53:03.703885 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 15:53:03.703897 | orchestrator | Saturday 12 July 2025 15:51:37 +0000 (0:00:00.250) 0:00:00.250 *********
2025-07-12 15:53:03.703908 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:53:03.703919 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:53:03.703930 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:53:03.703941 | orchestrator | ok: [testbed-manager]
2025-07-12 15:53:03.703952 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:53:03.703963 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:53:03.703973 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:53:03.704006 | orchestrator |
2025-07-12 15:53:03.704018 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 15:53:03.704028 | orchestrator | Saturday 12 July 2025 15:51:38 +0000 (0:00:00.934) 0:00:01.184 *********
2025-07-12 15:53:03.704039 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-07-12 15:53:03.704050 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-07-12 15:53:03.704060 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-07-12 15:53:03.704071 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-07-12 15:53:03.704082 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-07-12 15:53:03.704092 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-07-12 15:53:03.704103 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-07-12 15:53:03.704113 | orchestrator |
2025-07-12 15:53:03.704124 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-07-12 15:53:03.704136 | orchestrator |
2025-07-12 15:53:03.704146 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-07-12 15:53:03.704157 | orchestrator | Saturday 12 July 2025 15:51:39 +0000 (0:00:01.035) 0:00:02.220 *********
2025-07-12 15:53:03.704169 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:53:03.704180 | orchestrator |
2025-07-12 15:53:03.704191 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-07-12 15:53:03.704201 | orchestrator | Saturday 12 July 2025 15:51:41 +0000 (0:00:01.870) 0:00:04.091 *********
2025-07-12 15:53:03.704212 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-07-12 15:53:03.704222 | orchestrator |
2025-07-12 15:53:03.704233 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-07-12 15:53:03.704244 | orchestrator | Saturday 12 July 2025 15:51:45 +0000 (0:00:03.747) 0:00:07.838 *********
2025-07-12 15:53:03.704255 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-07-12 15:53:03.704266 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-07-12 15:53:03.704277 | orchestrator |
2025-07-12 15:53:03.704386 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-07-12 15:53:03.704403 | orchestrator | Saturday 12 July 2025 15:51:52 +0000 (0:00:07.419) 0:00:15.258 *********
2025-07-12 15:53:03.704416 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 15:53:03.704428 | orchestrator |
2025-07-12 15:53:03.704440 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-07-12 15:53:03.704453 | orchestrator | Saturday 12 July 2025 15:51:56 +0000 (0:00:03.686) 0:00:18.945 *********
2025-07-12 15:53:03.704465 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 15:53:03.704478 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-07-12 15:53:03.704491 | orchestrator |
2025-07-12 15:53:03.704503 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-07-12 15:53:03.704515 | orchestrator | Saturday 12 July 2025 15:52:01 +0000 (0:00:04.514) 0:00:23.460 *********
2025-07-12 15:53:03.704528 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 15:53:03.704541 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-07-12 15:53:03.704553 | orchestrator |
2025-07-12 15:53:03.704566 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-07-12 15:53:03.704578 | orchestrator | Saturday 12 July 2025 15:52:08 +0000 (0:00:07.332) 0:00:30.793 *********
2025-07-12 15:53:03.704591 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-07-12 15:53:03.704603 | orchestrator |
2025-07-12 15:53:03.704617 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:53:03.704639 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:53:03.704652 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:53:03.704665 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:53:03.704691 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:53:03.704702 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:53:03.704827 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:53:03.704845 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:53:03.704856 | orchestrator |
2025-07-12 15:53:03.704878 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:53:03.704889 | orchestrator | Saturday 12 July 2025 15:52:14 +0000 (0:00:05.797) 0:00:36.590 *********
2025-07-12 15:53:03.704899 | orchestrator | ===============================================================================
2025-07-12 15:53:03.704910 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.42s
2025-07-12 15:53:03.704920 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.33s
2025-07-12 15:53:03.704931 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.80s
2025-07-12 15:53:03.704941 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.51s
2025-07-12 15:53:03.704952 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.75s
2025-07-12 15:53:03.704962 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.69s
2025-07-12 15:53:03.704973 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.87s
2025-07-12 15:53:03.704984 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.04s
2025-07-12 15:53:03.704994 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s
2025-07-12 15:53:03.705005 | orchestrator |
2025-07-12 15:53:03.705026 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-07-12 15:53:03.705037 | orchestrator |
2025-07-12 15:53:03.705047 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-07-12 15:53:03.705057 | orchestrator | Saturday 12 July 2025 15:51:30 +0000 (0:00:00.263) 0:00:00.263 *********
2025-07-12 15:53:03.705068 | orchestrator | changed: [testbed-manager]
2025-07-12 15:53:03.705079 | orchestrator |
2025-07-12 15:53:03.705089 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-07-12 15:53:03.705100 | orchestrator | Saturday 12 July 2025 15:51:32 +0000 (0:00:02.115) 0:00:02.378 *********
2025-07-12 15:53:03.705111 | orchestrator | changed: [testbed-manager]
2025-07-12 15:53:03.705121 | orchestrator |
2025-07-12 15:53:03.705132 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-07-12 15:53:03.705143 | orchestrator | Saturday 12 July 2025 15:51:33 +0000 (0:00:01.080) 0:00:03.458 *********
2025-07-12 15:53:03.705153 | orchestrator | changed: [testbed-manager]
2025-07-12 15:53:03.705164 | orchestrator |
2025-07-12 15:53:03.705174 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-07-12 15:53:03.705185 | orchestrator | Saturday 12 July 2025 15:51:34 +0000 (0:00:01.076) 0:00:04.535 *********
2025-07-12 15:53:03.705195 | orchestrator | changed: [testbed-manager]
2025-07-12 15:53:03.705206 | orchestrator |
2025-07-12 15:53:03.705216 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-07-12 15:53:03.705235 | orchestrator | Saturday 12 July 2025 15:51:35 +0000 (0:00:01.120) 0:00:05.655 *********
2025-07-12 15:53:03.705246 | orchestrator | changed: [testbed-manager]
2025-07-12 15:53:03.705256 | orchestrator |
2025-07-12 15:53:03.705267 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-07-12 15:53:03.705278 | orchestrator | Saturday 12 July 2025 15:51:36 +0000 (0:00:00.892) 0:00:06.548 *********
2025-07-12 15:53:03.705288 | orchestrator | changed: [testbed-manager]
2025-07-12 15:53:03.705299 | orchestrator |
2025-07-12 15:53:03.705310 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-07-12 15:53:03.705320 | orchestrator | Saturday 12 July 2025 15:51:37 +0000 (0:00:00.879) 0:00:07.428 *********
2025-07-12 15:53:03.705331 | orchestrator | changed: [testbed-manager]
2025-07-12 15:53:03.705341 | orchestrator |
2025-07-12 15:53:03.705352 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-07-12 15:53:03.705362 | orchestrator | Saturday 12 July 2025 15:51:38 +0000 (0:00:01.229) 0:00:08.657 *********
2025-07-12 15:53:03.705373 | orchestrator | changed: [testbed-manager]
2025-07-12 15:53:03.705383 | orchestrator |
2025-07-12 15:53:03.705394 | orchestrator | TASK [Create admin user] *******************************************************
2025-07-12 15:53:03.705404 | orchestrator | Saturday 12 July 2025 15:51:39 +0000 (0:00:00.890) 0:00:09.547 *********
2025-07-12 15:53:03.705415 | orchestrator | changed: [testbed-manager]
2025-07-12 15:53:03.705426 | orchestrator |
2025-07-12 15:53:03.705556 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-07-12 15:53:03.705572 | orchestrator | Saturday 12 July 2025 15:52:37 +0000 (0:00:57.996) 0:01:07.544 *********
2025-07-12 15:53:03.705585 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:53:03.705597 | orchestrator |
2025-07-12 15:53:03.705610 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 15:53:03.705622 | orchestrator |
2025-07-12 15:53:03.705634 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 15:53:03.705646 | orchestrator | Saturday 12 July 2025 15:52:37 +0000 (0:00:00.114) 0:01:07.658 *********
2025-07-12 15:53:03.705658 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:53:03.705670 | orchestrator |
2025-07-12 15:53:03.705683 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 15:53:03.705695 | orchestrator |
2025-07-12 15:53:03.705711 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 15:53:03.705739 | orchestrator | Saturday 12 July 2025 15:52:49 +0000 (0:00:11.784) 0:01:19.443 *********
2025-07-12 15:53:03.705760 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:53:03.705780 | orchestrator |
2025-07-12 15:53:03.705820 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 15:53:03.705839 | orchestrator |
2025-07-12 15:53:03.705857 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 15:53:03.705868 | orchestrator | Saturday 12 July 2025 15:52:50 +0000 (0:00:01.124) 0:01:20.568 *********
2025-07-12 15:53:03.705879 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:53:03.705889 | orchestrator |
2025-07-12 15:53:03.705909 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:53:03.705921 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 15:53:03.705932 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:53:03.705943 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:53:03.705954 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 15:53:03.705965 | orchestrator |
2025-07-12 15:53:03.706164 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:53:03.706178 | orchestrator | Saturday 12 July 2025 15:53:02 +0000 (0:00:11.170) 0:01:31.738 *********
2025-07-12 15:53:03.706189 | orchestrator | ===============================================================================
2025-07-12 15:53:03.706200 | orchestrator | Create admin user ------------------------------------------------------ 58.00s
2025-07-12 15:53:03.706210 | orchestrator | Restart ceph manager service ------------------------------------------- 24.08s
2025-07-12 15:53:03.706221 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.12s
2025-07-12 15:53:03.706232 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.23s
2025-07-12 15:53:03.706242 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.12s
2025-07-12 15:53:03.706253 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.08s
2025-07-12 15:53:03.706264 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.08s
2025-07-12 15:53:03.706274 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.89s
2025-07-12 15:53:03.706285 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.89s
2025-07-12 15:53:03.706296 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.88s
2025-07-12 15:53:03.706306 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.11s
2025-07-12 15:53:03.706317 | orchestrator | 2025-07-12 15:53:03 | INFO  | Task f25a2832-4daa-4e13-8873-06b2417f9db4 is in state SUCCESS
2025-07-12 15:53:03.706328 | orchestrator | 2025-07-12 15:53:03 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:53:03.706339 | orchestrator | 2025-07-12 15:53:03 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:53:03.706349 | orchestrator | 2025-07-12 15:53:03 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:53:03.706360 | orchestrator | 2025-07-12 15:53:03 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:53:03.706371 | orchestrator | 2025-07-12 15:53:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:53:06.727015 | orchestrator | 2025-07-12 15:53:06 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:53:06.728833 | orchestrator | 2025-07-12 15:53:06 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:53:06.729200 | orchestrator | 2025-07-12 15:53:06 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:53:06.729789 | orchestrator | 2025-07-12 15:53:06 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:53:06.731041 | orchestrator | 2025-07-12
15:53:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:53:09.761170 | orchestrator | 2025-07-12 15:53:09 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:53:09.761406 | orchestrator | 2025-07-12 15:53:09 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:53:09.762148 | orchestrator | 2025-07-12 15:53:09 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:53:09.763048 | orchestrator | 2025-07-12 15:53:09 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:53:09.763082 | orchestrator | 2025-07-12 15:53:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:53:12.798408 | orchestrator | 2025-07-12 15:53:12 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:53:12.798495 | orchestrator | 2025-07-12 15:53:12 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:53:12.799225 | orchestrator | 2025-07-12 15:53:12 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:53:12.799703 | orchestrator | 2025-07-12 15:53:12 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:53:12.799725 | orchestrator | 2025-07-12 15:53:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:53:15.827267 | orchestrator | 2025-07-12 15:53:15 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:53:15.829519 | orchestrator | 2025-07-12 15:53:15 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:53:15.829557 | orchestrator | 2025-07-12 15:53:15 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:53:15.830688 | orchestrator | 2025-07-12 15:53:15 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:53:15.830716 | orchestrator | 2025-07-12 15:53:15 | INFO  | Wait 1 
second(s) until the next check 2025-07-12 15:53:18.872825 | orchestrator | 2025-07-12 15:53:18 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:53:18.873571 | orchestrator | 2025-07-12 15:53:18 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:53:18.875663 | orchestrator | 2025-07-12 15:53:18 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:53:18.878186 | orchestrator | 2025-07-12 15:53:18 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:53:18.878236 | orchestrator | 2025-07-12 15:53:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:53:21.923708 | orchestrator | 2025-07-12 15:53:21 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:53:21.925073 | orchestrator | 2025-07-12 15:53:21 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:53:21.927101 | orchestrator | 2025-07-12 15:53:21 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:53:21.928201 | orchestrator | 2025-07-12 15:53:21 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:53:21.928226 | orchestrator | 2025-07-12 15:53:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:53:24.968523 | orchestrator | 2025-07-12 15:53:24 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:53:24.970423 | orchestrator | 2025-07-12 15:53:24 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:53:24.971120 | orchestrator | 2025-07-12 15:53:24 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED 2025-07-12 15:53:24.972244 | orchestrator | 2025-07-12 15:53:24 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED 2025-07-12 15:53:24.972510 | orchestrator | 2025-07-12 15:53:24 | INFO  | Wait 1 second(s) until the next check 
[identical polling cycles, repeating every ~3 s from 15:53:28 through 15:54:28 with all four tasks in state STARTED, elided]
2025-07-12 15:54:31.934936 | orchestrator | 2025-07-12 15:54:31 | INFO  | Task
96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:54:31.936041 | orchestrator | 2025-07-12 15:54:31 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:54:31.937854 | orchestrator | 2025-07-12 15:54:31 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:54:31.940394 | orchestrator | 2025-07-12 15:54:31 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state STARTED
2025-07-12 15:54:31.940757 | orchestrator | 2025-07-12 15:54:31 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:54:34.991716 | orchestrator | 2025-07-12 15:54:34 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:54:34.993699 | orchestrator | 2025-07-12 15:54:34 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:54:34.996760 | orchestrator | 2025-07-12 15:54:34 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED
2025-07-12 15:54:34.998956 | orchestrator | 2025-07-12 15:54:34 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:54:35.002160 | orchestrator | 2025-07-12 15:54:34 | INFO  | Task 398c87c4-0835-4178-98af-4d816908d375 is in state SUCCESS
2025-07-12 15:54:35.002280 | orchestrator |
2025-07-12 15:54:35.004836 | orchestrator |
2025-07-12 15:54:35.004916 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 15:54:35.004937 | orchestrator |
2025-07-12 15:54:35.004956 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 15:54:35.004977 | orchestrator | Saturday 12 July 2025 15:51:37 +0000 (0:00:00.196) 0:00:00.196 *********
2025-07-12 15:54:35.004997 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:54:35.005063 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:54:35.005087 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:54:35.005098 | orchestrator |
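With the deploy task in state SUCCESS, the play output that follows registers glance in Keystone via service-ks-register, logging items of the form `glance -> <url> -> <interface>`. A minimal sketch of how those two endpoint items are assembled from the internal and external FQDNs; the helper name and parameters are illustrative, not the role's actual variables:

```python
def keystone_endpoint_items(service, internal_fqdn, external_fqdn, port):
    """Build the 'service -> url -> interface' items seen in the log.

    Illustrative helper mirroring the service-ks-register inputs
    (internal vs. external FQDN, shared port); not the real role code.
    """
    return [
        f"{service} -> https://{fqdn}:{port} -> {interface}"
        for fqdn, interface in (
            (internal_fqdn, "internal"),
            (external_fqdn, "public"),
        )
    ]
```

For this testbed, `keystone_endpoint_items("glance", "api-int.testbed.osism.xyz", "api.testbed.osism.xyz", 9292)` yields exactly the two endpoint items logged for the "Creating endpoints" task.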
2025-07-12 15:54:35.005109 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 15:54:35.005120 | orchestrator | Saturday 12 July 2025 15:51:37 +0000 (0:00:00.224) 0:00:00.421 *********
2025-07-12 15:54:35.005131 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-07-12 15:54:35.005142 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-07-12 15:54:35.005153 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-07-12 15:54:35.005164 | orchestrator |
2025-07-12 15:54:35.005174 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-07-12 15:54:35.005185 | orchestrator |
2025-07-12 15:54:35.005196 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-12 15:54:35.005207 | orchestrator | Saturday 12 July 2025 15:51:37 +0000 (0:00:00.297) 0:00:00.719 *********
2025-07-12 15:54:35.005217 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:54:35.005229 | orchestrator |
2025-07-12 15:54:35.005239 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-07-12 15:54:35.005250 | orchestrator | Saturday 12 July 2025 15:51:38 +0000 (0:00:00.545) 0:00:01.265 *********
2025-07-12 15:54:35.005261 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-07-12 15:54:35.005271 | orchestrator |
2025-07-12 15:54:35.005282 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-07-12 15:54:35.005292 | orchestrator | Saturday 12 July 2025 15:51:42 +0000 (0:00:03.986) 0:00:05.251 *********
2025-07-12 15:54:35.005303 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-07-12 15:54:35.005314 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-07-12 15:54:35.005325 | orchestrator |
2025-07-12 15:54:35.005335 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-07-12 15:54:35.005346 | orchestrator | Saturday 12 July 2025 15:51:49 +0000 (0:00:07.303) 0:00:12.555 *********
2025-07-12 15:54:35.005356 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-07-12 15:54:35.005367 | orchestrator |
2025-07-12 15:54:35.005377 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-07-12 15:54:35.005388 | orchestrator | Saturday 12 July 2025 15:51:53 +0000 (0:00:03.756) 0:00:16.312 *********
2025-07-12 15:54:35.005400 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 15:54:35.005413 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-07-12 15:54:35.005426 | orchestrator |
2025-07-12 15:54:35.005438 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-07-12 15:54:35.005451 | orchestrator | Saturday 12 July 2025 15:51:57 +0000 (0:00:04.180) 0:00:20.492 *********
2025-07-12 15:54:35.005464 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 15:54:35.005477 | orchestrator |
2025-07-12 15:54:35.005490 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-07-12 15:54:35.005502 | orchestrator | Saturday 12 July 2025 15:52:01 +0000 (0:00:03.875) 0:00:24.368 *********
2025-07-12 15:54:35.005514 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-07-12 15:54:35.005526 | orchestrator |
2025-07-12 15:54:35.005539 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-07-12 15:54:35.005552 | orchestrator | Saturday 12 July 2025 15:52:06 +0000 (0:00:04.691) 0:00:29.059 ********* 2025-07-12
15:54:35.005594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:54:35.005622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:54:35.005638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 15:54:35.005659 | orchestrator |
2025-07-12 15:54:35.005672 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-12 15:54:35.005685 | orchestrator | Saturday 12 July 2025 15:52:10 +0000 (0:00:04.516) 0:00:33.575 *********
2025-07-12 15:54:35.005705 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:54:35.005718 | orchestrator |
2025-07-12 15:54:35.005730 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-07-12 15:54:35.005744 | orchestrator | Saturday 12 July 2025 15:52:11 +0000 (0:00:00.478) 0:00:34.054 *********
2025-07-12 15:54:35.005757 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:54:35.005768 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:54:35.005806 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:54:35.005818 | orchestrator |
2025-07-12 15:54:35.005833 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-07-12 15:54:35.005844 | orchestrator | Saturday 12 July 2025 15:52:14 +0000 (0:00:03.732) 0:00:37.786 *********
2025-07-12 15:54:35.005855 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 15:54:35.005866 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 15:54:35.005877 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 15:54:35.005887 | orchestrator |
2025-07-12 15:54:35.005898 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-07-12 15:54:35.005908 | orchestrator | Saturday 12 July 2025 15:52:16 +0000 (0:00:01.545) 0:00:39.332 *********
2025-07-12 15:54:35.005919 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 15:54:35.005930 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 15:54:35.005941 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 15:54:35.005952 | orchestrator |
2025-07-12 15:54:35.005962 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-07-12 15:54:35.005973 | orchestrator | Saturday 12 July 2025 15:52:17 +0000 (0:00:01.226) 0:00:40.558 *********
2025-07-12 15:54:35.005984 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:54:35.005994 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:54:35.006005 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:54:35.006061 | orchestrator |
2025-07-12 15:54:35.006085 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-07-12 15:54:35.006105 | orchestrator | Saturday 12 July 2025 15:52:18 +0000 (0:00:00.765) 0:00:41.324 *********
2025-07-12 15:54:35.006124 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:35.006144 | orchestrator |
2025-07-12 15:54:35.006162 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-07-12 15:54:35.006180 | orchestrator | Saturday 12 July 2025 15:52:18 +0000 (0:00:00.183) 0:00:41.508 *********
2025-07-12 15:54:35.006191 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:35.006211 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:35.006222 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:35.006232 | orchestrator |
2025-07-12 15:54:35.006243 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-12 15:54:35.006253 | orchestrator | Saturday 12 July 2025 15:52:19 +0000 (0:00:00.608) 0:00:42.117 *********
2025-07-12 15:54:35.006264 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:54:35.006275 | orchestrator |
2025-07-12 15:54:35.006285 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-07-12 15:54:35.006296 | orchestrator | Saturday 12 July 2025 15:52:19 +0000 (0:00:00.758) 0:00:42.876 *********
2025-07-12 15:54:35.006316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:54:35.006357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:54:35.006380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:54:35.006392 | orchestrator | 2025-07-12 15:54:35.006403 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-07-12 15:54:35.006414 | orchestrator | Saturday 12 July 2025 15:52:26 +0000 (0:00:06.688) 0:00:49.564 ********* 2025-07-12 15:54:35.006440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 15:54:35.006453 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:54:35.006465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 15:54:35.006483 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:35.006508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 15:54:35.006521 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:54:35.006532 | orchestrator | 2025-07-12 15:54:35.006543 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-07-12 15:54:35.006553 | orchestrator | Saturday 12 July 2025 15:52:29 +0000 (0:00:03.142) 0:00:52.707 ********* 2025-07-12 15:54:35.006565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 15:54:35.006583 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:35.006606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 15:54:35.006619 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:54:35.006630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 15:54:35.006654 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:35.006665 | orchestrator |
2025-07-12 15:54:35.006676 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2025-07-12 15:54:35.006687 | orchestrator | Saturday 12 July 2025 15:52:32 +0000 (0:00:03.054) 0:00:55.762 *********
2025-07-12 15:54:35.006698 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:35.006708 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:35.006719 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:35.006730 | orchestrator |
2025-07-12 15:54:35.006740 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2025-07-12 15:54:35.006751 | orchestrator | Saturday 12 July 2025 15:52:36 +0000 (0:00:04.008) 0:00:59.770 *********
2025-07-12 15:54:35.006770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:54:35.006817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:54:35.006838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 15:54:35.006851 | orchestrator |
2025-07-12 15:54:35.006862 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-07-12 15:54:35.006872 | orchestrator | Saturday 12 July 2025 15:52:42 +0000 (0:00:05.835) 0:01:05.606 *********
2025-07-12 15:54:35.006883 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:54:35.006894 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:54:35.006904 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:54:35.006915 | orchestrator |
2025-07-12 15:54:35.006925 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-07-12 15:54:35.006942 | orchestrator | Saturday 12 July 2025 15:52:50 +0000 (0:00:07.932) 0:01:13.538 *********
2025-07-12 15:54:35.006954 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:35.006964 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:35.006975 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:35.006985 | orchestrator |
2025-07-12 15:54:35.006996 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-07-12 15:54:35.007012 | orchestrator | Saturday 12 July 2025 15:52:57 +0000 (0:00:06.679) 0:01:20.218 *********
2025-07-12 15:54:35.007029 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:35.007040 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:35.007050 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:35.007061 | orchestrator |
2025-07-12 15:54:35.007071 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-07-12 15:54:35.007082 | orchestrator | Saturday 12 July 2025 15:53:03 +0000 (0:00:05.766) 0:01:25.984 *********
2025-07-12 15:54:35.007092 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:35.007102 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:35.007113 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:35.007123 | orchestrator |
2025-07-12 15:54:35.007134 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-07-12 15:54:35.007144 | orchestrator | Saturday 12 July 2025 15:53:06 +0000 (0:00:03.893) 0:01:29.878 *********
2025-07-12 15:54:35.007169 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:35.007180 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:35.007191 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:35.007202 | orchestrator |
2025-07-12 15:54:35.007213 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-07-12 15:54:35.007223 | orchestrator | Saturday 12 July 2025 15:53:10 +0000 (0:00:03.527) 0:01:33.406 *********
2025-07-12 15:54:35.007234 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:35.007245 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:35.007255 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:35.007266 | orchestrator |
2025-07-12 15:54:35.007277 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-07-12 15:54:35.007287 | orchestrator |
Saturday 12 July 2025 15:53:10 +0000 (0:00:00.271) 0:01:33.677 ********* 2025-07-12 15:54:35.007298 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-12 15:54:35.007309 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:54:35.007320 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-12 15:54:35.007330 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:35.007341 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-12 15:54:35.007352 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:54:35.007362 | orchestrator | 2025-07-12 15:54:35.007373 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-07-12 15:54:35.007384 | orchestrator | Saturday 12 July 2025 15:53:13 +0000 (0:00:02.989) 0:01:36.667 ********* 2025-07-12 15:54:35.007395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:54:35.007429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:54:35.007443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 15:54:35.007455 | orchestrator | 2025-07-12 15:54:35.007466 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-12 15:54:35.007477 | orchestrator | Saturday 12 July 2025 15:53:17 +0000 (0:00:03.435) 0:01:40.103 ********* 2025-07-12 15:54:35.007494 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:35.007504 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:54:35.007515 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:54:35.007526 | orchestrator | 2025-07-12 15:54:35.007536 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-07-12 15:54:35.007547 | orchestrator | Saturday 12 July 2025 15:53:17 +0000 (0:00:00.240) 0:01:40.344 ********* 2025-07-12 15:54:35.007557 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:54:35.007568 | orchestrator | 2025-07-12 15:54:35.007579 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-07-12 15:54:35.007589 | orchestrator | Saturday 12 July 2025 15:53:19 +0000 (0:00:02.255) 0:01:42.599 ********* 2025-07-12 15:54:35.007600 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:54:35.007611 | orchestrator | 2025-07-12 15:54:35.007621 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-07-12 15:54:35.007632 | orchestrator | Saturday 12 July 2025 15:53:21 +0000 (0:00:01.861) 0:01:44.461 ********* 2025-07-12 15:54:35.007643 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:54:35.007653 | orchestrator | 2025-07-12 15:54:35.007664 | orchestrator | TASK [glance : Running Glance bootstrap container] 
*****************************
2025-07-12 15:54:35.007680 | orchestrator | Saturday 12 July 2025 15:53:23 +0000 (0:00:01.961) 0:01:46.423 *********
2025-07-12 15:54:35.007692 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:54:35.007702 | orchestrator |
2025-07-12 15:54:35.007713 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-07-12 15:54:35.007724 | orchestrator | Saturday 12 July 2025 15:53:53 +0000 (0:00:29.829) 0:02:16.252 *********
2025-07-12 15:54:35.007735 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:54:35.007745 | orchestrator |
2025-07-12 15:54:35.007761 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 15:54:35.007773 | orchestrator | Saturday 12 July 2025 15:53:56 +0000 (0:00:02.940) 0:02:19.192 *********
2025-07-12 15:54:35.007833 | orchestrator |
2025-07-12 15:54:35.007845 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 15:54:35.007855 | orchestrator | Saturday 12 July 2025 15:53:56 +0000 (0:00:00.059) 0:02:19.252 *********
2025-07-12 15:54:35.007866 | orchestrator |
2025-07-12 15:54:35.007877 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 15:54:35.007887 | orchestrator | Saturday 12 July 2025 15:53:56 +0000 (0:00:00.094) 0:02:19.346 *********
2025-07-12 15:54:35.007898 | orchestrator |
2025-07-12 15:54:35.007908 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-07-12 15:54:35.007919 | orchestrator | Saturday 12 July 2025 15:53:56 +0000 (0:00:00.081) 0:02:19.428 *********
2025-07-12 15:54:35.007930 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:54:35.007940 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:54:35.007951 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:54:35.007962 | orchestrator |
2025-07-12 15:54:35.007972 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:54:35.007983 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-12 15:54:35.007995 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 15:54:35.008006 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 15:54:35.008017 | orchestrator |
2025-07-12 15:54:35.008027 | orchestrator |
2025-07-12 15:54:35.008038 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:54:35.008049 | orchestrator | Saturday 12 July 2025 15:54:32 +0000 (0:00:35.890) 0:02:55.318 *********
2025-07-12 15:54:35.008059 | orchestrator | ===============================================================================
2025-07-12 15:54:35.008070 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.89s
2025-07-12 15:54:35.008087 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.83s
2025-07-12 15:54:35.008098 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.93s
2025-07-12 15:54:35.008109 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.30s
2025-07-12 15:54:35.008119 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.69s
2025-07-12 15:54:35.008130 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.68s
2025-07-12 15:54:35.008140 | orchestrator | glance : Copying over config.json files for services -------------------- 5.84s
2025-07-12 15:54:35.008151 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.77s
2025-07-12 15:54:35.008161 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.69s
2025-07-12 15:54:35.008172 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.52s
2025-07-12 15:54:35.008183 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.18s
2025-07-12 15:54:35.008193 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.01s
2025-07-12 15:54:35.008204 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.99s
2025-07-12 15:54:35.008215 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.89s
2025-07-12 15:54:35.008225 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.88s
2025-07-12 15:54:35.008236 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.76s
2025-07-12 15:54:35.008247 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.73s
2025-07-12 15:54:35.008257 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.53s
2025-07-12 15:54:35.008268 | orchestrator | glance : Check glance containers ---------------------------------------- 3.44s
2025-07-12 15:54:35.008278 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.14s
2025-07-12 15:54:35.008289 | orchestrator | 2025-07-12 15:54:34 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:54:38.048511 | orchestrator | 2025-07-12 15:54:38 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:54:38.049045 | orchestrator | 2025-07-12 15:54:38 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:54:38.051396 | orchestrator | 2025-07-12 15:54:38 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED
2025-07-12 15:54:38.054713 | orchestrator | 2025-07-12 15:54:38 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state STARTED
2025-07-12 15:54:38.055165 | orchestrator | 2025-07-12 15:54:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:54:41.095092 | orchestrator | 2025-07-12 15:54:41 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:54:41.099401 | orchestrator | 2025-07-12 15:54:41 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED
2025-07-12 15:54:41.100644 | orchestrator | 2025-07-12 15:54:41 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED
2025-07-12 15:54:41.105973 | orchestrator | 2025-07-12 15:54:41 | INFO  | Task 42c3f6a4-cd52-46ec-b547-f5719e1f5f06 is in state SUCCESS
2025-07-12 15:54:41.107518 | orchestrator |
2025-07-12 15:54:41.107549 | orchestrator |
2025-07-12 15:54:41.107561 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 15:54:41.107573 | orchestrator |
2025-07-12 15:54:41.107584 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 15:54:41.107596 | orchestrator | Saturday 12 July 2025 15:51:30 +0000 (0:00:00.274) 0:00:00.274 *********
2025-07-12 15:54:41.107607 | orchestrator | ok: [testbed-manager]
2025-07-12 15:54:41.107619 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:54:41.107654 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:54:41.107666 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:54:41.107676 | orchestrator | ok: [testbed-node-3]
2025-07-12 15:54:41.107687 | orchestrator | ok: [testbed-node-4]
2025-07-12 15:54:41.107697 | orchestrator | ok: [testbed-node-5]
2025-07-12 15:54:41.107708 | orchestrator |
2025-07-12 15:54:41.107718 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 15:54:41.107729 | orchestrator | Saturday 12 July 2025 15:51:31 +0000 (0:00:00.873) 0:00:01.147
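The PLAY RECAP lines above follow Ansible's fixed `host : key=value …` layout, which makes them easy to post-process when mining job logs like this one. As an illustration only (not part of the job itself), a minimal Python sketch that extracts the per-host counters from such a recap line, assuming the Zuul timestamp prefix has already been stripped:

```python
import re

# A recap line as it appears in the log above (timestamp prefix removed).
RECAP_LINE = "testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0"

def parse_recap(line):
    """Parse an Ansible PLAY RECAP line into (host, counter dict)."""
    # The hostname is separated from the counters by " : ".
    host, _, rest = line.partition(" : ")
    # Each counter is a word=integer pair; collect them into a dict.
    counters = {key: int(value) for key, value in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, counters = parse_recap(RECAP_LINE)
# host == "testbed-node-0"; counters["changed"] == 19; counters["failed"] == 0
```

A nonzero `failed` or `unreachable` counter is what typically turns a run like this into a job failure, so these two keys are the usual ones to assert on in log tooling.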
*********
2025-07-12 15:54:41.107740 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-07-12 15:54:41.107752 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-07-12 15:54:41.107762 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-07-12 15:54:41.107802 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-07-12 15:54:41.107814 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-07-12 15:54:41.107825 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-07-12 15:54:41.107836 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-07-12 15:54:41.107846 | orchestrator |
2025-07-12 15:54:41.107857 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-07-12 15:54:41.107868 | orchestrator |
2025-07-12 15:54:41.107879 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-07-12 15:54:41.107890 | orchestrator | Saturday 12 July 2025 15:51:32 +0000 (0:00:00.738) 0:00:01.886 *********
2025-07-12 15:54:41.107901 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:54:41.107914 | orchestrator |
2025-07-12 15:54:41.107924 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-07-12 15:54:41.107935 | orchestrator | Saturday 12 July 2025 15:51:33 +0000 (0:00:01.679) 0:00:03.565 *********
2025-07-12 15:54:41.107951 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 15:54:41.107967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.107981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108002 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108083 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108095 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108165 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 15:54:41.108179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108220 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108255 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108392 | orchestrator |
2025-07-12 15:54:41.108403 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-07-12 15:54:41.108414 | orchestrator | Saturday 12 July 2025 15:51:37 +0000 (0:00:03.957) 0:00:07.523 *********
2025-07-12 15:54:41.108426 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:54:41.108437 | orchestrator |
2025-07-12 15:54:41.108448 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-07-12 15:54:41.108459 | orchestrator | Saturday 12 July 2025 15:51:38 +0000 (0:00:01.187) 0:00:08.711 *********
2025-07-12 15:54:41.108470 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 15:54:41.108490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108559 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.108581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108634 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108658 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108709 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 15:54:41.108726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108811 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.108830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.108841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro',
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.108853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.108877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.108889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.108902 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.108913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.108924 | orchestrator | 2025-07-12 15:54:41.108936 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-07-12 15:54:41.108954 | orchestrator | Saturday 12 July 2025 15:51:44 +0000 (0:00:06.095) 0:00:14.806 ********* 2025-07-12 15:54:41.108966 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 15:54:41.108977 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 15:54:41.108989 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109014 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 15:54:41.109027 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 15:54:41.109058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109109 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:54:41.109128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 15:54:41.109141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 15:54:41.109205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 15:54:41.109252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109308 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:54:41.109319 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:41.109330 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:54:41.109341 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:54:41.109353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 15:54:41.109369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109401 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:54:41.109412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 15:54:41.109431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109454 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:54:41.109464 | orchestrator | 2025-07-12 15:54:41.109476 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-07-12 15:54:41.109487 | orchestrator | Saturday 12 July 2025 15:51:46 +0000 (0:00:01.667) 0:00:16.474 ********* 2025-07-12 15:54:41.109499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 
15:54:41.109510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109527 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 15:54:41.109546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 15:54:41.109577 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 15:54:41.109589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 15:54:41.109600 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.109613 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 15:54:41.109638 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.109650 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:41.109661 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:54:41.109673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.109691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.109703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.109714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.109726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.109738 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:41.109749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.109760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.109799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.109819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.109830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.109842 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:41.109853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.109864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.109876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.109887 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:54:41.109898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.109916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.109935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.109947 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:54:41.109958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.110002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110098 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:54:41.110109 | orchestrator |
2025-07-12 15:54:41.110120 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-07-12 15:54:41.110131 | orchestrator | Saturday 12 July 2025 15:51:48 +0000 (0:00:01.908) 0:00:18.383 *********
2025-07-12 15:54:41.110143 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 15:54:41.110155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.110187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.110199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.110211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.110223 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.110234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.110246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 15:54:41.110258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.110276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.110300 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.110335 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.110376 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 15:54:41.110403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.110415 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.110438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110480 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.110502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 15:54:41.110526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.110538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.110549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 15:54:41.110561 | orchestrator |
2025-07-12 15:54:41.110572 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-07-12 15:54:41.110583 | orchestrator | Saturday 12 July 2025 15:51:55 +0000 (0:00:06.469) 0:00:24.853 *********
2025-07-12 15:54:41.110594 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 15:54:41.110606 | orchestrator |
2025-07-12 15:54:41.110617 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-07-12 15:54:41.110628 | orchestrator | Saturday 12 July 2025 15:51:55 +0000 (0:00:00.840) 0:00:25.693 *********
2025-07-12 15:54:41.110648 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102380, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5373926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110660 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102380, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5373926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110687 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102380, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5373926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110700 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102380, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5373926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110711 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102380, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5373926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110723 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102380, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5373926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110734 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102372, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5333924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110753 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102372, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5333924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110765 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102380, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5373926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110821 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102372, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5333924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110834 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102372, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5333924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110846 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102372, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5333924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110858 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1102077, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 15:54:41.110870 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1102077, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr':
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.110889 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1102077, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.110900 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102372, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5333924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.110925 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102078, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:54:41.110949 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1102077, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.110961 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102078, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.110972 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1102077, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.110990 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102372, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5333924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:54:41.111002 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102078, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111013 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102078, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111037 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1102368, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5313926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111049 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1102077, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111060 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1102368, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5313926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111072 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102078, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 
'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111090 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1102368, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5313926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111102 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1102368, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5313926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111113 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102078, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111136 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102080, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4553914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111148 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102080, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4553914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111160 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102080, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4553914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111171 | orchestrator | skipping: [testbed-node-3] 
=> (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1102368, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5313926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111189 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1102077, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:54:41.111200 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102080, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4553914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111212 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102086, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4563916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111235 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1102368, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5313926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111247 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102086, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4563916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111259 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102086, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 
'ctime': 1752332941.4563916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111276 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102086, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4563916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111288 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102080, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4553914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111315 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1102373, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5343926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111327 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1102080, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4553914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111349 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1102373, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5343926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111362 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102086, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4563916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111373 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1102373, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5343926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111391 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1102373, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5343926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111402 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102086, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4563916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111414 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1102379, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5353925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111425 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1102373, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5343926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111447 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1102379, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5353925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111459 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102078, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1752332941.4533916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 15:54:41.111471 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1102379, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5353925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111492 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1102379, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5353925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111503 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1102379, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5353925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111515 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1102391, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5403926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111526 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1102373, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5343926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111556 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1102391, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.5403926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 15:54:41.111568 | orchestrator | skipping: [testbed-node-0] => 
(item=/operations/prometheus/redfish.rules)
2025-07-12 15:54:41.111580 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-12 15:54:41.111598 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-07-12 15:54:41.111610 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2025-07-12 15:54:41.111621 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2025-07-12 15:54:41.111633 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2025-07-12 15:54:41.111655 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2025-07-12 15:54:41.111667 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2025-07-12 15:54:41.111685 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2025-07-12 15:54:41.111696 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-07-12 15:54:41.111708 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2025-07-12 15:54:41.111719 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 15:54:41.111730 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 15:54:41.111747 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2025-07-12 15:54:41.111766 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 15:54:41.111805 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 15:54:41.111817 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 15:54:41.111828 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 15:54:41.111840 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 15:54:41.111851 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 15:54:41.111868 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 15:54:41.111886 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2025-07-12 15:54:41.111905 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 15:54:41.111916 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 15:54:41.111928 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 15:54:41.111939 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 15:54:41.111951 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 15:54:41.111967 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2025-07-12 15:54:41.111991 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 15:54:41.112004 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 15:54:41.112015 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2025-07-12 15:54:41.112027 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules)
2025-07-12 15:54:41.112038 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 15:54:41.112050 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 15:54:41.112066 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 15:54:41.112092 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2025-07-12 15:54:41.112103 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2025-07-12 15:54:41.112115 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 15:54:41.112126 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 15:54:41.112138 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 15:54:41.112150 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2025-07-12 15:54:41.112165 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules)
2025-07-12 15:54:41.112191 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 15:54:41.112203 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 15:54:41.112214 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 15:54:41.112226 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2025-07-12 15:54:41.112237 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 15:54:41.112249 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 15:54:41.112267 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 15:54:41.112285 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:41.112304 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 15:54:41.112315 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:54:41.112327 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 15:54:41.112338 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:41.112349 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 15:54:41.112361 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 15:54:41.112372 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 15:54:41.112384 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 15:54:41.112400 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:41.112422 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 15:54:41.112434 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:54:41.112446 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 15:54:41.112457 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:54:41.112468 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-12 15:54:41.112480 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/redfish.rules)
2025-07-12 15:54:41.112491 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2025-07-12 15:54:41.112503 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 15:54:41.112521 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 15:54:41.112543 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 15:54:41.112556 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/mysql.rules)
2025-07-12 15:54:41.112567 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 15:54:41.112578 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 15:54:41.112590 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 15:54:41.112601 | orchestrator |
2025-07-12 15:54:41.112612 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-07-12 15:54:41.112624 | orchestrator | Saturday 12 July 2025 15:52:19 +0000 (0:00:23.172) 0:00:48.865 *********
2025-07-12 15:54:41.112635 | orchestrator | ok:
[testbed-manager -> localhost]
2025-07-12 15:54:41.112652 | orchestrator |
2025-07-12 15:54:41.112663 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-07-12 15:54:41.112674 | orchestrator | Saturday 12 July 2025 15:52:19 +0000 (0:00:00.839) 0:00:49.705 *********
2025-07-12 15:54:41.112685 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-07-12 15:54:41.112739 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-12 15:54:41.112750 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-07-12 15:54:41.112824 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-07-12 15:54:41.112895 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-07-12 15:54:41.112962 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-07-12 15:54:41.113016 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-07-12 15:54:41.113069 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-07-12 15:54:41.113123 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 15:54:41.113133 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 15:54:41.113144 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 15:54:41.113155 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 15:54:41.113166 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 15:54:41.113176 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 15:54:41.113191 | orchestrator | 2025-07-12 15:54:41.113208 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-07-12 15:54:41.113226 | orchestrator | Saturday 12 July 2025 15:52:22 +0000 (0:00:02.852) 0:00:52.558 ********* 2025-07-12 15:54:41.113237 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 15:54:41.113248 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:41.113259 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 15:54:41.113270 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 15:54:41.113281 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 15:54:41.113291 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:54:41.113302 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:54:41.113313 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:54:41.113323 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 15:54:41.113334 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:54:41.113344 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 15:54:41.113355 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:54:41.113365 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-07-12 
15:54:41.113376 | orchestrator | 2025-07-12 15:54:41.113387 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-07-12 15:54:41.113397 | orchestrator | Saturday 12 July 2025 15:52:39 +0000 (0:00:16.834) 0:01:09.392 ********* 2025-07-12 15:54:41.113408 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 15:54:41.113418 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:54:41.113429 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 15:54:41.113439 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:41.113450 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 15:54:41.113461 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:54:41.113471 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 15:54:41.113482 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:54:41.113492 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 15:54:41.113505 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:54:41.113524 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 15:54:41.113535 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:54:41.113545 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-07-12 15:54:41.113556 | orchestrator | 2025-07-12 15:54:41.113567 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-07-12 15:54:41.113577 | orchestrator | Saturday 12 July 2025 15:52:42 +0000 (0:00:03.221) 0:01:12.614 ********* 2025-07-12 15:54:41.113589 | 
orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 15:54:41.113608 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:41.113632 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-07-12 15:54:41.113650 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 15:54:41.113662 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:54:41.113673 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 15:54:41.113690 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:54:41.113701 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 15:54:41.113712 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:54:41.113722 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 15:54:41.113733 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:54:41.113744 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 15:54:41.113754 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:54:41.113765 | orchestrator | 2025-07-12 15:54:41.113848 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-07-12 15:54:41.113863 | orchestrator | Saturday 12 July 2025 15:52:45 +0000 (0:00:02.328) 0:01:14.942 ********* 2025-07-12 15:54:41.113874 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 
15:54:41.113885 | orchestrator | 2025-07-12 15:54:41.113896 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-07-12 15:54:41.113906 | orchestrator | Saturday 12 July 2025 15:52:46 +0000 (0:00:01.308) 0:01:16.250 ********* 2025-07-12 15:54:41.113917 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:54:41.113928 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:41.113939 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:54:41.113950 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:54:41.113961 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:54:41.113971 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:54:41.113982 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:54:41.113993 | orchestrator | 2025-07-12 15:54:41.114004 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-07-12 15:54:41.114014 | orchestrator | Saturday 12 July 2025 15:52:47 +0000 (0:00:00.962) 0:01:17.212 ********* 2025-07-12 15:54:41.114069 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:54:41.114085 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:54:41.114101 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:54:41.114112 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:54:41.114125 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:54:41.114143 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:54:41.114154 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:54:41.114165 | orchestrator | 2025-07-12 15:54:41.114175 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-07-12 15:54:41.114186 | orchestrator | Saturday 12 July 2025 15:52:49 +0000 (0:00:02.587) 0:01:19.800 ********* 2025-07-12 15:54:41.114197 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 
15:54:41.114207 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 15:54:41.114218 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 15:54:41.114229 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:41.114241 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:54:41.114260 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:54:41.114271 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 15:54:41.114281 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:54:41.114291 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 15:54:41.114301 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 15:54:41.114310 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:54:41.114319 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:54:41.114329 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 15:54:41.114346 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:54:41.114355 | orchestrator | 2025-07-12 15:54:41.114365 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-07-12 15:54:41.114374 | orchestrator | Saturday 12 July 2025 15:52:51 +0000 (0:00:01.969) 0:01:21.769 ********* 2025-07-12 15:54:41.114384 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-12 15:54:41.114393 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:54:41.114403 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-12 15:54:41.114412 | orchestrator | skipping: 
[testbed-node-2]
2025-07-12 15:54:41.114421 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 15:54:41.114431 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 15:54:41.114440 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:41.114450 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 15:54:41.114459 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:54:41.114474 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 15:54:41.114484 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:54:41.114500 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 15:54:41.114510 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:54:41.114519 | orchestrator |
2025-07-12 15:54:41.114529 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-07-12 15:54:41.114538 | orchestrator | Saturday 12 July 2025 15:52:55 +0000 (0:00:03.335) 0:01:25.104 *********
2025-07-12 15:54:41.114548 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-07-12 15:54:41.114595 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 15:54:41.114604 | orchestrator |
2025-07-12 15:54:41.114614 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-07-12 15:54:41.114623 | orchestrator | Saturday 12 July 2025 15:52:57 +0000 (0:00:01.903) 0:01:27.008 *********
2025-07-12 15:54:41.114632 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:54:41.114642 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:41.114651 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:41.114660 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:41.114670 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:54:41.114679 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:54:41.114688 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:54:41.114698 | orchestrator |
2025-07-12 15:54:41.114707 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-07-12 15:54:41.114717 | orchestrator | Saturday 12 July 2025 15:52:58 +0000 (0:00:01.429) 0:01:28.437 *********
2025-07-12 15:54:41.114726 | orchestrator | skipping: [testbed-manager]
2025-07-12 15:54:41.114735 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:54:41.114744 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:54:41.114754 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:54:41.114763 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:54:41.114772 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:54:41.114829 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:54:41.114839 | orchestrator |
2025-07-12 15:54:41.114848 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-07-12 15:54:41.114864 | orchestrator | Saturday 12 July 2025 15:53:00 +0000 (0:00:01.416) 0:01:29.854 *********
2025-07-12 15:54:41.114875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 15:54:41.114885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 15:54:41.114896 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 15:54:41.114911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 15:54:41.114929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 15:54:41.114940 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 15:54:41.114950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.114974 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.114989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.115000 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 15:54:41.115017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.115032 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 15:54:41.115046 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.115055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.115068 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.115077 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 15:54:41.115086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.115095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.115112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.115121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.115129 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.115143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.115151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.115159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.115168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.115179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 15:54:41.115194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.115202 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.115216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 15:54:41.115224 | orchestrator | 2025-07-12 15:54:41.115232 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-07-12 15:54:41.115240 | orchestrator | Saturday 12 July 2025 15:53:04 +0000 (0:00:04.661) 0:01:34.516 ********* 2025-07-12 15:54:41.115248 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-12 15:54:41.115256 | orchestrator | skipping: [testbed-manager] 2025-07-12 15:54:41.115264 | orchestrator | 2025-07-12 15:54:41.115272 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 15:54:41.115279 | orchestrator | Saturday 12 July 2025 15:53:05 +0000 (0:00:01.010) 0:01:35.526 ********* 2025-07-12 15:54:41.115287 | orchestrator | 2025-07-12 15:54:41.115295 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 15:54:41.115302 | 
orchestrator | Saturday 12 July 2025 15:53:06 +0000 (0:00:00.375) 0:01:35.901 ********* 2025-07-12 15:54:41.115310 | orchestrator | 2025-07-12 15:54:41.115318 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 15:54:41.115325 | orchestrator | Saturday 12 July 2025 15:53:06 +0000 (0:00:00.125) 0:01:36.027 ********* 2025-07-12 15:54:41.115333 | orchestrator | 2025-07-12 15:54:41.115341 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 15:54:41.115348 | orchestrator | Saturday 12 July 2025 15:53:06 +0000 (0:00:00.120) 0:01:36.148 ********* 2025-07-12 15:54:41.115356 | orchestrator | 2025-07-12 15:54:41.115364 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 15:54:41.115371 | orchestrator | Saturday 12 July 2025 15:53:06 +0000 (0:00:00.102) 0:01:36.251 ********* 2025-07-12 15:54:41.115379 | orchestrator | 2025-07-12 15:54:41.115387 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 15:54:41.115395 | orchestrator | Saturday 12 July 2025 15:53:06 +0000 (0:00:00.048) 0:01:36.299 ********* 2025-07-12 15:54:41.115402 | orchestrator | 2025-07-12 15:54:41.115410 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 15:54:41.115417 | orchestrator | Saturday 12 July 2025 15:53:06 +0000 (0:00:00.053) 0:01:36.353 ********* 2025-07-12 15:54:41.115425 | orchestrator | 2025-07-12 15:54:41.115433 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-07-12 15:54:41.115440 | orchestrator | Saturday 12 July 2025 15:53:06 +0000 (0:00:00.069) 0:01:36.422 ********* 2025-07-12 15:54:41.115448 | orchestrator | changed: [testbed-manager] 2025-07-12 15:54:41.115455 | orchestrator | 2025-07-12 15:54:41.115463 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-node-exporter container] ****** 2025-07-12 15:54:41.115471 | orchestrator | Saturday 12 July 2025 15:53:24 +0000 (0:00:18.201) 0:01:54.624 ********* 2025-07-12 15:54:41.115478 | orchestrator | changed: [testbed-manager] 2025-07-12 15:54:41.115486 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:54:41.115494 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:54:41.115502 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:54:41.115509 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:54:41.115517 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:54:41.115524 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:54:41.115532 | orchestrator | 2025-07-12 15:54:41.115540 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-07-12 15:54:41.115552 | orchestrator | Saturday 12 July 2025 15:53:38 +0000 (0:00:13.695) 0:02:08.320 ********* 2025-07-12 15:54:41.115560 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:54:41.115568 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:54:41.115575 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:54:41.115583 | orchestrator | 2025-07-12 15:54:41.115591 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-07-12 15:54:41.115599 | orchestrator | Saturday 12 July 2025 15:53:43 +0000 (0:00:05.425) 0:02:13.745 ********* 2025-07-12 15:54:41.115610 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:54:41.115618 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:54:41.115625 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:54:41.115633 | orchestrator | 2025-07-12 15:54:41.115641 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-07-12 15:54:41.115654 | orchestrator | Saturday 12 July 2025 15:53:49 +0000 (0:00:05.230) 0:02:18.976 ********* 2025-07-12 15:54:41.115662 | orchestrator | changed: 
[testbed-node-0] 2025-07-12 15:54:41.115669 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:54:41.115677 | orchestrator | changed: [testbed-manager] 2025-07-12 15:54:41.115685 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:54:41.115693 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:54:41.115700 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:54:41.115708 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:54:41.115715 | orchestrator | 2025-07-12 15:54:41.115723 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-07-12 15:54:41.115731 | orchestrator | Saturday 12 July 2025 15:54:01 +0000 (0:00:12.638) 0:02:31.614 ********* 2025-07-12 15:54:41.115739 | orchestrator | changed: [testbed-manager] 2025-07-12 15:54:41.115746 | orchestrator | 2025-07-12 15:54:41.115754 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-07-12 15:54:41.115762 | orchestrator | Saturday 12 July 2025 15:54:07 +0000 (0:00:06.104) 0:02:37.719 ********* 2025-07-12 15:54:41.115769 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:54:41.115790 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:54:41.115798 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:54:41.115806 | orchestrator | 2025-07-12 15:54:41.115814 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-07-12 15:54:41.115822 | orchestrator | Saturday 12 July 2025 15:54:19 +0000 (0:00:11.689) 0:02:49.409 ********* 2025-07-12 15:54:41.115830 | orchestrator | changed: [testbed-manager] 2025-07-12 15:54:41.115837 | orchestrator | 2025-07-12 15:54:41.115845 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-07-12 15:54:41.115853 | orchestrator | Saturday 12 July 2025 15:54:29 +0000 (0:00:09.980) 0:02:59.389 ********* 2025-07-12 15:54:41.115860 | orchestrator | changed: 
[testbed-node-5] 2025-07-12 15:54:41.115868 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:54:41.115875 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:54:41.115883 | orchestrator | 2025-07-12 15:54:41.115891 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:54:41.115899 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 15:54:41.115907 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 15:54:41.115915 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 15:54:41.115923 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 15:54:41.115931 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 15:54:41.115945 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 15:54:41.115953 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 15:54:41.115960 | orchestrator | 2025-07-12 15:54:41.115968 | orchestrator | 2025-07-12 15:54:41.115976 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:54:41.115984 | orchestrator | Saturday 12 July 2025 15:54:40 +0000 (0:00:10.621) 0:03:10.011 ********* 2025-07-12 15:54:41.115992 | orchestrator | =============================================================================== 2025-07-12 15:54:41.116000 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.17s 2025-07-12 15:54:41.116007 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.20s 2025-07-12 15:54:41.116015 | 
orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.83s 2025-07-12 15:54:41.116023 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.70s 2025-07-12 15:54:41.116030 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 12.64s 2025-07-12 15:54:41.116038 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.69s 2025-07-12 15:54:41.116046 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.62s 2025-07-12 15:54:41.116054 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.98s 2025-07-12 15:54:41.116061 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.47s 2025-07-12 15:54:41.116069 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.10s 2025-07-12 15:54:41.116077 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.10s 2025-07-12 15:54:41.116084 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.43s 2025-07-12 15:54:41.116092 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.23s 2025-07-12 15:54:41.116100 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.66s 2025-07-12 15:54:41.116111 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.96s 2025-07-12 15:54:41.116119 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.34s 2025-07-12 15:54:41.116127 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.22s 2025-07-12 15:54:41.116139 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.85s 2025-07-12 15:54:41.116147 | orchestrator | 
prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.59s 2025-07-12 15:54:41.116155 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.33s 2025-07-12 15:54:44.148743 | orchestrator | 2025-07-12 15:54:44 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:54:44.149197 | orchestrator | 2025-07-12 15:54:44 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:54:44.149587 | orchestrator | 2025-07-12 15:54:44 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:54:44.150493 | orchestrator | 2025-07-12 15:54:44 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:54:44.150542 | orchestrator | 2025-07-12 15:54:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:54:47.187279 | orchestrator | 2025-07-12 15:54:47 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:54:47.187988 | orchestrator | 2025-07-12 15:54:47 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:54:47.189243 | orchestrator | 2025-07-12 15:54:47 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:54:47.190204 | orchestrator | 2025-07-12 15:54:47 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:54:47.190238 | orchestrator | 2025-07-12 15:54:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:54:50.234612 | orchestrator | 2025-07-12 15:54:50 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:54:50.236949 | orchestrator | 2025-07-12 15:54:50 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:54:50.238321 | orchestrator | 2025-07-12 15:54:50 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:54:50.240079 | orchestrator | 2025-07-12 15:54:50 | INFO  | 
Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:54:50.240118 | orchestrator | 2025-07-12 15:54:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:54:53.289526 | orchestrator | 2025-07-12 15:54:53 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:54:53.291092 | orchestrator | 2025-07-12 15:54:53 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:54:53.292713 | orchestrator | 2025-07-12 15:54:53 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:54:53.294089 | orchestrator | 2025-07-12 15:54:53 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:54:53.294119 | orchestrator | 2025-07-12 15:54:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:54:56.339732 | orchestrator | 2025-07-12 15:54:56 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:54:56.341539 | orchestrator | 2025-07-12 15:54:56 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:54:56.342430 | orchestrator | 2025-07-12 15:54:56 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:54:56.344670 | orchestrator | 2025-07-12 15:54:56 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:54:56.344905 | orchestrator | 2025-07-12 15:54:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:54:59.396989 | orchestrator | 2025-07-12 15:54:59 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:54:59.397812 | orchestrator | 2025-07-12 15:54:59 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:54:59.399266 | orchestrator | 2025-07-12 15:54:59 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:54:59.400387 | orchestrator | 2025-07-12 15:54:59 | INFO  | Task 
3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:54:59.400410 | orchestrator | 2025-07-12 15:54:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:55:02.443571 | orchestrator | 2025-07-12 15:55:02 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:55:02.445227 | orchestrator | 2025-07-12 15:55:02 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:55:02.448873 | orchestrator | 2025-07-12 15:55:02 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:55:02.450905 | orchestrator | 2025-07-12 15:55:02 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:55:02.450934 | orchestrator | 2025-07-12 15:55:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:55:05.495006 | orchestrator | 2025-07-12 15:55:05 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:55:05.497677 | orchestrator | 2025-07-12 15:55:05 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:55:05.499298 | orchestrator | 2025-07-12 15:55:05 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:55:05.501412 | orchestrator | 2025-07-12 15:55:05 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:55:05.501444 | orchestrator | 2025-07-12 15:55:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:55:08.545816 | orchestrator | 2025-07-12 15:55:08 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:55:08.549215 | orchestrator | 2025-07-12 15:55:08 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:55:08.551855 | orchestrator | 2025-07-12 15:55:08 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:55:08.553692 | orchestrator | 2025-07-12 15:55:08 | INFO  | Task 
3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:55:08.553719 | orchestrator | 2025-07-12 15:55:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:55:11.608352 | orchestrator | 2025-07-12 15:55:11 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:55:11.609809 | orchestrator | 2025-07-12 15:55:11 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:55:11.612503 | orchestrator | 2025-07-12 15:55:11 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:55:11.614689 | orchestrator | 2025-07-12 15:55:11 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:55:11.615399 | orchestrator | 2025-07-12 15:55:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:55:14.651086 | orchestrator | 2025-07-12 15:55:14 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:55:14.651810 | orchestrator | 2025-07-12 15:55:14 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:55:14.652610 | orchestrator | 2025-07-12 15:55:14 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:55:14.653482 | orchestrator | 2025-07-12 15:55:14 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:55:14.653731 | orchestrator | 2025-07-12 15:55:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:55:17.686422 | orchestrator | 2025-07-12 15:55:17 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:55:17.686508 | orchestrator | 2025-07-12 15:55:17 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:55:17.686524 | orchestrator | 2025-07-12 15:55:17 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:55:17.686955 | orchestrator | 2025-07-12 15:55:17 | INFO  | Task 
3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:55:17.686981 | orchestrator | 2025-07-12 15:55:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:55:20.728672 | orchestrator | 2025-07-12 15:55:20 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:55:20.729670 | orchestrator | 2025-07-12 15:55:20 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:55:20.730992 | orchestrator | 2025-07-12 15:55:20 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:55:20.732394 | orchestrator | 2025-07-12 15:55:20 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:55:20.732417 | orchestrator | 2025-07-12 15:55:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:55:23.769484 | orchestrator | 2025-07-12 15:55:23 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:55:23.771461 | orchestrator | 2025-07-12 15:55:23 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:55:23.771511 | orchestrator | 2025-07-12 15:55:23 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:55:23.773159 | orchestrator | 2025-07-12 15:55:23 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:55:23.773182 | orchestrator | 2025-07-12 15:55:23 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:55:26.802682 | orchestrator | 2025-07-12 15:55:26 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:55:26.802904 | orchestrator | 2025-07-12 15:55:26 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state STARTED 2025-07-12 15:55:26.803620 | orchestrator | 2025-07-12 15:55:26 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:55:26.804270 | orchestrator | 2025-07-12 15:55:26 | INFO  | Task 
3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED 2025-07-12 15:55:26.804293 | orchestrator | 2025-07-12 15:55:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:55:29.846246 | orchestrator | 2025-07-12 15:55:29 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:55:29.846355 | orchestrator | 2025-07-12 15:55:29 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:55:29.847788 | orchestrator | 2025-07-12 15:55:29 | INFO  | Task 5f3100bd-ecfa-49fe-951f-b8d7204e8e03 is in state SUCCESS 2025-07-12 15:55:29.849153 | orchestrator | 2025-07-12 15:55:29.849184 | orchestrator | 2025-07-12 15:55:29.849196 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:55:29.849207 | orchestrator | 2025-07-12 15:55:29.849218 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:55:29.849321 | orchestrator | Saturday 12 July 2025 15:51:44 +0000 (0:00:00.238) 0:00:00.238 ********* 2025-07-12 15:55:29.849337 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:55:29.849349 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:55:29.849360 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:55:29.849370 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:55:29.849381 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:55:29.849392 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:55:29.849402 | orchestrator | 2025-07-12 15:55:29.849430 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:55:29.849443 | orchestrator | Saturday 12 July 2025 15:51:44 +0000 (0:00:00.565) 0:00:00.804 ********* 2025-07-12 15:55:29.849454 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-07-12 15:55:29.849466 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-07-12 15:55:29.849488 | orchestrator | ok: 
[testbed-node-2] => (item=enable_cinder_True) 2025-07-12 15:55:29.849500 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-07-12 15:55:29.849534 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-07-12 15:55:29.849547 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-07-12 15:55:29.849557 | orchestrator | 2025-07-12 15:55:29.849568 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-07-12 15:55:29.849579 | orchestrator | 2025-07-12 15:55:29.849590 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 15:55:29.849601 | orchestrator | Saturday 12 July 2025 15:51:45 +0000 (0:00:00.673) 0:00:01.477 ********* 2025-07-12 15:55:29.849612 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:55:29.849646 | orchestrator | 2025-07-12 15:55:29.849657 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-07-12 15:55:29.849667 | orchestrator | Saturday 12 July 2025 15:51:46 +0000 (0:00:01.421) 0:00:02.899 ********* 2025-07-12 15:55:29.849678 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-07-12 15:55:29.849689 | orchestrator | 2025-07-12 15:55:29.849700 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-07-12 15:55:29.849711 | orchestrator | Saturday 12 July 2025 15:51:50 +0000 (0:00:03.632) 0:00:06.531 ********* 2025-07-12 15:55:29.849722 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-07-12 15:55:29.849732 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-07-12 15:55:29.849743 | 
orchestrator | 2025-07-12 15:55:29.849754 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-07-12 15:55:29.849787 | orchestrator | Saturday 12 July 2025 15:51:57 +0000 (0:00:07.441) 0:00:13.973 ********* 2025-07-12 15:55:29.849797 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 15:55:29.849808 | orchestrator | 2025-07-12 15:55:29.849819 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-07-12 15:55:29.849830 | orchestrator | Saturday 12 July 2025 15:52:01 +0000 (0:00:03.436) 0:00:17.409 ********* 2025-07-12 15:55:29.849840 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 15:55:29.849852 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-07-12 15:55:29.849862 | orchestrator | 2025-07-12 15:55:29.849873 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-07-12 15:55:29.849884 | orchestrator | Saturday 12 July 2025 15:52:05 +0000 (0:00:04.305) 0:00:21.714 ********* 2025-07-12 15:55:29.849894 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 15:55:29.849905 | orchestrator | 2025-07-12 15:55:29.849915 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-07-12 15:55:29.849938 | orchestrator | Saturday 12 July 2025 15:52:09 +0000 (0:00:03.746) 0:00:25.461 ********* 2025-07-12 15:55:29.849949 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-07-12 15:55:29.849960 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-07-12 15:55:29.849971 | orchestrator | 2025-07-12 15:55:29.849981 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-07-12 15:55:29.849992 | orchestrator | Saturday 12 July 2025 15:52:18 +0000 (0:00:08.957) 0:00:34.418 ********* 2025-07-12 
15:55:29.850059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.850079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.850099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.850111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.850128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.850140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.850161 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.850180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.850192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.850204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 15:55:29.850220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 15:55:29.850238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 15:55:29.850256 | orchestrator |
2025-07-12 15:55:29.850267 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 15:55:29.850278 | orchestrator | Saturday 12 July 2025 15:52:20 +0000 (0:00:02.554) 0:00:36.972 *********
2025-07-12 15:55:29.850289 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:55:29.850300 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:55:29.850311 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:55:29.850322 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:55:29.850332 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:55:29.850343 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:55:29.850354 | orchestrator |
2025-07-12 15:55:29.850364 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 15:55:29.850375 | orchestrator | Saturday 12 July 2025 15:52:22 +0000 (0:00:01.525) 0:00:38.498 *********
2025-07-12 15:55:29.850386 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:55:29.850397 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:55:29.850407 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:55:29.850419 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:55:29.850429 | orchestrator |
2025-07-12 15:55:29.850440 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-07-12 15:55:29.850451 | orchestrator | Saturday 12 July 2025 15:52:24 +0000 (0:00:01.819) 0:00:40.318 *********
2025-07-12 15:55:29.850462 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-07-12 15:55:29.850472 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-07-12 15:55:29.850483 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-07-12 15:55:29.850494 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-07-12 15:55:29.850504 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-07-12 15:55:29.850515 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
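The `(item={'key': …, 'value': …})` entries in this log are what Ansible prints when a task loops over kolla-ansible's per-service dictionary (each service entry carrying its container name, image, volumes, and healthcheck). A minimal sketch of that pattern, with the dict contents abbreviated and the task wording illustrative rather than copied from the role:

```yaml
# Sketch only: kolla-ansible defines a per-role services dict like this
# (heavily abbreviated; the real cinder_services entries also carry
# volumes, dimensions, healthcheck, and haproxy settings).
cinder_services:
  cinder-api:
    container_name: cinder_api
    group: cinder-api
    enabled: true
    image: "registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711"
  cinder-volume:
    container_name: cinder_volume
    group: cinder-volume
    enabled: true
    image: "registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711"

# Tasks then iterate over the dict; dict2items turns each entry into
# {'key': <service name>, 'value': <service config>}, which is exactly
# the shape echoed in the loop output above.
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
  loop: "{{ cinder_services | dict2items }}"
  when: item.value.enabled | bool
```

Hosts not in a service's group (or items whose condition fails) show up as `skipping:` lines, which is why only the storage nodes (testbed-node-3/4/5) act on the cinder-volume and cinder-backup items.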
2025-07-12 15:55:29.850525 | orchestrator | 2025-07-12 15:55:29.850536 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-07-12 15:55:29.850547 | orchestrator | Saturday 12 July 2025 15:52:26 +0000 (0:00:01.860) 0:00:42.178 ********* 2025-07-12 15:55:29.850559 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 15:55:29.850575 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 
15:55:29.850594 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 15:55:29.850612 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 15:55:29.850624 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 15:55:29.850635 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 15:55:29.850651 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 15:55:29.850676 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 15:55:29.850689 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 15:55:29.850700 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 15:55:29.850712 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 15:55:29.850727 | orchestrator | 
changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 15:55:29.850745 | orchestrator |
2025-07-12 15:55:29.850755 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-07-12 15:55:29.850831 | orchestrator | Saturday 12 July 2025 15:52:29 +0000 (0:00:03.263) 0:00:45.441 *********
2025-07-12 15:55:29.850843 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12 15:55:29.850893 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12 15:55:29.850906 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12 15:55:29.850917 | orchestrator |
2025-07-12 15:55:29.850928 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-07-12 15:55:29.850939 | orchestrator | Saturday 12 July 2025 15:52:31 +0000 (0:00:01.752) 0:00:47.194 *********
2025-07-12 15:55:29.850957 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-07-12 15:55:29.850969 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-07-12 15:55:29.850979 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-07-12 15:55:29.850990 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 15:55:29.851001 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 15:55:29.851012 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 15:55:29.851023 | orchestrator |
2025-07-12 15:55:29.851033 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-07-12 15:55:29.851044 | orchestrator | Saturday 12 July 2025 15:52:33 +0000 (0:00:02.774) 0:00:49.969 *********
2025-07-12 15:55:29.851055 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-07-12 15:55:29.851066 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-07-12 15:55:29.851076 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-07-12 15:55:29.851087 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-07-12 15:55:29.851098 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-07-12 15:55:29.851109 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-07-12 15:55:29.851119 | orchestrator |
2025-07-12 15:55:29.851130 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-07-12 15:55:29.851141 | orchestrator | Saturday 12 July 2025 15:52:35 +0000 (0:00:00.151) 0:00:51.143 *********
2025-07-12 15:55:29.851152 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:55:29.851163 | orchestrator |
2025-07-12 15:55:29.851173 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-07-12 15:55:29.851184 | orchestrator | Saturday 12 July 2025 15:52:35 +0000 (0:00:00.151) 0:00:51.294 *********
2025-07-12 15:55:29.851195 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:55:29.851206 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:55:29.851217 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:55:29.851228 | orchestrator | skipping: [testbed-node-3]
2025-07-12 15:55:29.851238 | orchestrator | skipping: [testbed-node-4]
2025-07-12 15:55:29.851249 | orchestrator | skipping: [testbed-node-5]
2025-07-12 15:55:29.851260 | orchestrator |
2025-07-12 15:55:29.851271 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 15:55:29.851282 | orchestrator | Saturday 12 July 2025 15:52:35 +0000 (0:00:00.719) 0:00:52.014 *********
2025-07-12 15:55:29.851294 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 15:55:29.851314 | orchestrator |
2025-07-12 15:55:29.851325 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-07-12 15:55:29.851336 | orchestrator | Saturday 12 July 2025 15:52:37 +0000 (0:00:01.101) 0:00:53.115 *********
2025-07-12 15:55:29.851359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}}) 2025-07-12 15:55:29.851371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.851390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.851402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.851414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.851437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.851449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.851467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.851479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.851491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.851508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}) 2025-07-12 15:55:29.851524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.851536 | orchestrator | 2025-07-12 15:55:29.851547 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-07-12 15:55:29.851558 | orchestrator | Saturday 12 July 2025 15:52:40 +0000 (0:00:03.471) 0:00:56.586 ********* 2025-07-12 15:55:29.851575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2025-07-12 15:55:29.851587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 15:55:29.851616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 15:55:29.851644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851655 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:55:29.851674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851704 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:55:29.851715 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:55:29.851726 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:55:29.851738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851812 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:55:29.851825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851857 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:55:29.851868 | orchestrator | 2025-07-12 15:55:29.851879 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-07-12 15:55:29.851890 | orchestrator | Saturday 12 July 2025 15:52:42 +0000 (0:00:01.658) 0:00:58.245 ********* 2025-07-12 15:55:29.851901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 15:55:29.851921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 15:55:29.851949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 15:55:29.851980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.851997 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:55:29.852008 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:55:29.852019 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:55:29.852030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.852041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.852052 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:55:29.852068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.852086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.852105 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:55:29.852116 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.852128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.852139 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:55:29.852150 | orchestrator | 2025-07-12 15:55:29.852161 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-07-12 15:55:29.852172 | orchestrator | Saturday 12 July 2025 15:52:45 +0000 (0:00:02.911) 0:01:01.156 ********* 2025-07-12 15:55:29.852187 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.852199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.852217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.852236 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852351 | orchestrator | 2025-07-12 15:55:29.852361 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-07-12 15:55:29.852371 | orchestrator | Saturday 12 July 2025 15:52:48 +0000 (0:00:03.325) 0:01:04.482 ********* 2025-07-12 15:55:29.852381 | orchestrator | 
skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 15:55:29.852390 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:55:29.852400 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 15:55:29.852410 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:55:29.852427 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 15:55:29.852436 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:55:29.852446 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 15:55:29.852456 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 15:55:29.852731 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 15:55:29.852746 | orchestrator | 2025-07-12 15:55:29.852756 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-07-12 15:55:29.852782 | orchestrator | Saturday 12 July 2025 15:52:51 +0000 (0:00:02.655) 0:01:07.137 ********* 2025-07-12 15:55:29.852793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.852804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.852830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.852875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.852954 | orchestrator | 2025-07-12 15:55:29.852963 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-07-12 15:55:29.852973 | orchestrator | Saturday 12 July 2025 15:53:03 +0000 (0:00:11.938) 0:01:19.075 ********* 2025-07-12 15:55:29.852983 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:55:29.852993 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:55:29.853002 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:55:29.853012 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:55:29.853021 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:55:29.853031 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:55:29.853040 | orchestrator | 2025-07-12 15:55:29.853050 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-07-12 15:55:29.853059 | orchestrator | Saturday 12 July 2025 15:53:05 +0000 (0:00:02.472) 0:01:21.548 ********* 2025-07-12 15:55:29.853069 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 15:55:29.853090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.853101 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:55:29.853116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 15:55:29.853127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.853137 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:55:29.853147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 15:55:29.853157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.853167 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:55:29.853181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.853197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.853207 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:55:29.853222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.853233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.853243 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:55:29.853253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.853272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 15:55:29.853282 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:55:29.853292 | orchestrator | 2025-07-12 15:55:29.853301 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-07-12 15:55:29.853311 | orchestrator | Saturday 12 July 2025 15:53:06 +0000 (0:00:01.095) 0:01:22.643 ********* 2025-07-12 15:55:29.853321 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:55:29.853331 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:55:29.853340 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:55:29.853351 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:55:29.853361 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:55:29.853372 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:55:29.853383 | orchestrator | 2025-07-12 15:55:29.853393 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-07-12 15:55:29.853404 | orchestrator | Saturday 12 July 2025 15:53:07 +0000 (0:00:00.660) 0:01:23.304 ********* 2025-07-12 15:55:29.853421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.853433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.853445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 15:55:29.853469 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.853481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.853498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.853510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.853521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.853537 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.853553 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.853565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.853582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 15:55:29.853594 | orchestrator | 2025-07-12 15:55:29.853605 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 15:55:29.853616 | orchestrator | Saturday 12 July 2025 15:53:09 +0000 (0:00:02.679) 0:01:25.983 ********* 2025-07-12 15:55:29.853626 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:55:29.853637 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:55:29.853647 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:55:29.853658 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:55:29.853669 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:55:29.853679 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:55:29.853690 | orchestrator | 2025-07-12 15:55:29.853701 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-07-12 15:55:29.853711 | orchestrator | Saturday 12 July 2025 15:53:10 +0000 (0:00:00.694) 0:01:26.678 ********* 2025-07-12 15:55:29.853725 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:55:29.853735 | orchestrator | 2025-07-12 
15:55:29.853745 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-07-12 15:55:29.853754 | orchestrator | Saturday 12 July 2025 15:53:13 +0000 (0:00:02.516) 0:01:29.195 *********
2025-07-12 15:55:29.853807 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:55:29.853818 | orchestrator |
2025-07-12 15:55:29.853827 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-07-12 15:55:29.853837 | orchestrator | Saturday 12 July 2025 15:53:15 +0000 (0:00:02.485) 0:01:31.680 *********
2025-07-12 15:55:29.853846 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:55:29.853855 | orchestrator |
2025-07-12 15:55:29.853865 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-12 15:55:29.853874 | orchestrator | Saturday 12 July 2025 15:53:34 +0000 (0:00:19.125) 0:01:50.805 *********
2025-07-12 15:55:29.853884 | orchestrator |
2025-07-12 15:55:29.853893 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-12 15:55:29.853903 | orchestrator | Saturday 12 July 2025 15:53:34 +0000 (0:00:00.060) 0:01:50.865 *********
2025-07-12 15:55:29.853912 | orchestrator |
2025-07-12 15:55:29.853922 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-12 15:55:29.853931 | orchestrator | Saturday 12 July 2025 15:53:34 +0000 (0:00:00.058) 0:01:50.924 *********
2025-07-12 15:55:29.853940 | orchestrator |
2025-07-12 15:55:29.853950 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-12 15:55:29.853959 | orchestrator | Saturday 12 July 2025 15:53:34 +0000 (0:00:00.060) 0:01:50.985 *********
2025-07-12 15:55:29.853969 | orchestrator |
2025-07-12 15:55:29.853978 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-12 15:55:29.853987 | orchestrator | Saturday 12 July 2025 15:53:35 +0000 (0:00:00.058) 0:01:51.043 *********
2025-07-12 15:55:29.853997 | orchestrator |
2025-07-12 15:55:29.854006 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-12 15:55:29.854042 | orchestrator | Saturday 12 July 2025 15:53:35 +0000 (0:00:00.072) 0:01:51.116 *********
2025-07-12 15:55:29.854054 | orchestrator |
2025-07-12 15:55:29.854064 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-07-12 15:55:29.854073 | orchestrator | Saturday 12 July 2025 15:53:35 +0000 (0:00:00.060) 0:01:51.176 *********
2025-07-12 15:55:29.854083 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:55:29.854097 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:55:29.854107 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:55:29.854116 | orchestrator |
2025-07-12 15:55:29.854126 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-07-12 15:55:29.854135 | orchestrator | Saturday 12 July 2025 15:53:55 +0000 (0:00:20.169) 0:02:11.346 *********
2025-07-12 15:55:29.854145 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:55:29.854154 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:55:29.854163 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:55:29.854173 | orchestrator |
2025-07-12 15:55:29.854182 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-07-12 15:55:29.854192 | orchestrator | Saturday 12 July 2025 15:54:03 +0000 (0:00:08.301) 0:02:19.648 *********
2025-07-12 15:55:29.854201 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:55:29.854211 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:55:29.854220 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:55:29.854230 | orchestrator |
2025-07-12 15:55:29.854239 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-07-12 15:55:29.854248 | orchestrator | Saturday 12 July 2025 15:55:14 +0000 (0:01:11.025) 0:03:30.673 *********
2025-07-12 15:55:29.854258 | orchestrator | changed: [testbed-node-4]
2025-07-12 15:55:29.854267 | orchestrator | changed: [testbed-node-3]
2025-07-12 15:55:29.854277 | orchestrator | changed: [testbed-node-5]
2025-07-12 15:55:29.854286 | orchestrator |
2025-07-12 15:55:29.854296 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-07-12 15:55:29.854312 | orchestrator | Saturday 12 July 2025 15:55:25 +0000 (0:00:11.055) 0:03:41.728 *********
2025-07-12 15:55:29.854321 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:55:29.854328 | orchestrator |
2025-07-12 15:55:29.854336 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:55:29.854349 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 15:55:29.854358 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 15:55:29.854366 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 15:55:29.854374 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-12 15:55:29.854381 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-12 15:55:29.854389 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-12 15:55:29.854397 | orchestrator |
2025-07-12 15:55:29.854405 | orchestrator |
2025-07-12 15:55:29.854413 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:55:29.854420 | orchestrator | Saturday 12 July 2025 15:55:26 +0000 (0:00:00.968) 0:03:42.697 *********
2025-07-12 15:55:29.854428 | orchestrator | ===============================================================================
2025-07-12 15:55:29.854436 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 71.03s
2025-07-12 15:55:29.854444 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 20.17s
2025-07-12 15:55:29.854452 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.13s
2025-07-12 15:55:29.854459 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.94s
2025-07-12 15:55:29.854470 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.06s
2025-07-12 15:55:29.854478 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.96s
2025-07-12 15:55:29.854486 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.30s
2025-07-12 15:55:29.854494 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.44s
2025-07-12 15:55:29.854501 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.31s
2025-07-12 15:55:29.854509 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.75s
2025-07-12 15:55:29.854517 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.63s
2025-07-12 15:55:29.854525 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.47s
2025-07-12 15:55:29.854533 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.44s
2025-07-12 15:55:29.854540 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.33s
2025-07-12 15:55:29.854548 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------
3.26s
2025-07-12 15:55:29.854556 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.91s
2025-07-12 15:55:29.854563 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.78s
2025-07-12 15:55:29.854571 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.68s
2025-07-12 15:55:29.854579 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.66s
2025-07-12 15:55:29.854587 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.55s
2025-07-12 15:55:29.854600 | orchestrator | 2025-07-12 15:55:29 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED
2025-07-12 15:55:29.854611 | orchestrator | 2025-07-12 15:55:29 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED
2025-07-12 15:55:29.854619 | orchestrator | 2025-07-12 15:55:29 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:55:32.874532 | orchestrator | 2025-07-12 15:55:32 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED
2025-07-12 15:55:32.874685 | orchestrator | 2025-07-12 15:55:32 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:55:32.874702 | orchestrator | 2025-07-12 15:55:32 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED
2025-07-12 15:55:32.874939 | orchestrator | 2025-07-12 15:55:32 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED
2025-07-12 15:55:32.874971 | orchestrator | 2025-07-12 15:55:32 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:55:35.907196 | orchestrator | 2025-07-12 15:55:35 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED
2025-07-12 15:55:35.907655 | orchestrator | 2025-07-12 15:55:35 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:55:35.909231 | orchestrator | 2025-07-12
15:55:35 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED
2025-07-12 15:55:35.910678 | orchestrator | 2025-07-12 15:55:35 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED
2025-07-12 15:55:35.910711 | orchestrator | 2025-07-12 15:55:35 | INFO  | Wait 1 second(s) until the next check
[... the same four tasks remain in state STARTED, polled every 3 seconds from 15:55:38 through 15:56:36 ...]
2025-07-12 15:56:39.579221 | orchestrator | 2025-07-12 15:56:39 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED
2025-07-12 15:56:39.580699 | orchestrator | 2025-07-12 15:56:39 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:56:39.581164 | orchestrator | 2025-07-12 15:56:39 | INFO  | Task
4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED
2025-07-12 15:56:39.581709 | orchestrator | 2025-07-12 15:56:39 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state STARTED
2025-07-12 15:56:39.581721 | orchestrator | 2025-07-12 15:56:39 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:56:42.600242 | orchestrator | 2025-07-12 15:56:42 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED
2025-07-12 15:56:42.600434 | orchestrator | 2025-07-12 15:56:42 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:56:42.600982 | orchestrator | 2025-07-12 15:56:42 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED
2025-07-12 15:56:42.602171 | orchestrator | 2025-07-12 15:56:42 | INFO  | Task 3077c1ad-9160-46b2-8dd4-658437f660f6 is in state SUCCESS
2025-07-12 15:56:42.604273 | orchestrator |
2025-07-12 15:56:42.604412 | orchestrator |
2025-07-12 15:56:42.604429 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 15:56:42.604442 | orchestrator |
2025-07-12 15:56:42.604453 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 15:56:42.604465 | orchestrator | Saturday 12 July 2025 15:54:44 +0000 (0:00:00.253) 0:00:00.253 *********
2025-07-12 15:56:42.604476 | orchestrator | ok: [testbed-node-0]
2025-07-12 15:56:42.604487 | orchestrator | ok: [testbed-node-1]
2025-07-12 15:56:42.604497 | orchestrator | ok: [testbed-node-2]
2025-07-12 15:56:42.604508 | orchestrator |
2025-07-12 15:56:42.604520 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 15:56:42.604531 | orchestrator | Saturday 12 July 2025 15:54:45 +0000 (0:00:00.299) 0:00:00.552 *********
2025-07-12 15:56:42.604542 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-07-12 15:56:42.604553 | orchestrator | ok: [testbed-node-1] =>
(item=enable_barbican_True)
2025-07-12 15:56:42.604563 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-07-12 15:56:42.604574 | orchestrator |
2025-07-12 15:56:42.604585 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-07-12 15:56:42.604595 | orchestrator |
2025-07-12 15:56:42.604606 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-07-12 15:56:42.604616 | orchestrator | Saturday 12 July 2025 15:54:45 +0000 (0:00:00.421) 0:00:00.974 *********
2025-07-12 15:56:42.604627 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 15:56:42.604638 | orchestrator |
2025-07-12 15:56:42.604649 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-07-12 15:56:42.604659 | orchestrator | Saturday 12 July 2025 15:54:46 +0000 (0:00:00.527) 0:00:01.502 *********
2025-07-12 15:56:42.604670 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-07-12 15:56:42.604681 | orchestrator |
2025-07-12 15:56:42.604691 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-07-12 15:56:42.604702 | orchestrator | Saturday 12 July 2025 15:54:49 +0000 (0:00:03.465) 0:00:04.967 *********
2025-07-12 15:56:42.604712 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-07-12 15:56:42.604743 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-07-12 15:56:42.604754 | orchestrator |
2025-07-12 15:56:42.604765 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-07-12 15:56:42.604775 | orchestrator | Saturday 12 July 2025 15:54:56 +0000 (0:00:06.650) 0:00:11.618 *********
2025-07-12 15:56:42.604787 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 15:56:42.604823 | orchestrator |
2025-07-12 15:56:42.604836 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-07-12 15:56:42.604848 | orchestrator | Saturday 12 July 2025 15:54:59 +0000 (0:00:03.233) 0:00:14.851 *********
2025-07-12 15:56:42.604860 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 15:56:42.604872 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-07-12 15:56:42.604884 | orchestrator |
2025-07-12 15:56:42.604896 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-07-12 15:56:42.604908 | orchestrator | Saturday 12 July 2025 15:55:03 +0000 (0:00:04.244) 0:00:19.095 *********
2025-07-12 15:56:42.604920 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 15:56:42.604944 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-07-12 15:56:42.604957 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-07-12 15:56:42.604969 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-07-12 15:56:42.604981 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-07-12 15:56:42.604993 | orchestrator |
2025-07-12 15:56:42.605006 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-07-12 15:56:42.605018 | orchestrator | Saturday 12 July 2025 15:55:20 +0000 (0:00:16.336) 0:00:35.432 *********
2025-07-12 15:56:42.605030 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-07-12 15:56:42.605042 | orchestrator |
2025-07-12 15:56:42.605054 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-07-12 15:56:42.605066 | orchestrator | Saturday 12 July 2025 15:55:24 +0000 (0:00:04.372) 0:00:39.805 *********
2025-07-12 15:56:42.605081 | orchestrator
| changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.605116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.605131 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.605151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605258 | orchestrator | 2025-07-12 15:56:42.605269 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-07-12 15:56:42.605280 | orchestrator | Saturday 12 July 2025 15:55:26 +0000 (0:00:01.723) 0:00:41.528 ********* 2025-07-12 15:56:42.605291 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-07-12 15:56:42.605301 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-07-12 15:56:42.605312 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-07-12 15:56:42.605322 | orchestrator | 2025-07-12 15:56:42.605333 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-07-12 15:56:42.605343 | orchestrator | Saturday 12 July 2025 15:55:27 +0000 (0:00:01.412) 0:00:42.941 ********* 2025-07-12 15:56:42.605354 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:56:42.605365 | orchestrator | 2025-07-12 15:56:42.605375 | 
orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-07-12 15:56:42.605386 | orchestrator | Saturday 12 July 2025 15:55:27 +0000 (0:00:00.161) 0:00:43.102 ********* 2025-07-12 15:56:42.605397 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:56:42.605407 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:56:42.605418 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:56:42.605428 | orchestrator | 2025-07-12 15:56:42.605439 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-12 15:56:42.605454 | orchestrator | Saturday 12 July 2025 15:55:28 +0000 (0:00:00.522) 0:00:43.624 ********* 2025-07-12 15:56:42.605465 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:56:42.605475 | orchestrator | 2025-07-12 15:56:42.605486 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-07-12 15:56:42.605496 | orchestrator | Saturday 12 July 2025 15:55:29 +0000 (0:00:00.898) 0:00:44.523 ********* 2025-07-12 15:56:42.605508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.605527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.605545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.605556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.605642 | orchestrator | 2025-07-12 15:56:42.605654 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-07-12 15:56:42.605664 | orchestrator | Saturday 12 July 2025 15:55:32 +0000 (0:00:03.739) 0:00:48.263 ********* 2025-07-12 15:56:42.605675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 15:56:42.605690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.605702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.605713 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:56:42.605766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 15:56:42.605786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.605798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.605820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 15:56:42.605831 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:56:42.605843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.605854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.605886 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:56:42.605898 | orchestrator | 2025-07-12 15:56:42.605915 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-07-12 15:56:42.605927 | orchestrator | Saturday 12 July 2025 15:55:33 +0000 (0:00:00.919) 0:00:49.182 ********* 2025-07-12 15:56:42.605938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 15:56:42.605949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.605961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.605972 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:56:42.605988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 15:56:42.606000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2025-07-12 15:56:42.606068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.606084 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:56:42.606096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 15:56:42.606107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.606125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.606147 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:56:42.606172 | orchestrator | 2025-07-12 15:56:42.606199 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-07-12 15:56:42.606217 | orchestrator | Saturday 12 July 2025 15:55:34 +0000 (0:00:00.677) 0:00:49.860 ********* 2025-07-12 15:56:42.606236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.606606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.606642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.606653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.606672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.606682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.606714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.606758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.606769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.606778 | orchestrator | 2025-07-12 15:56:42.606789 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-07-12 15:56:42.606799 | orchestrator | Saturday 12 July 2025 15:55:37 +0000 (0:00:03.374) 0:00:53.235 ********* 2025-07-12 15:56:42.606808 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:56:42.606818 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:56:42.606828 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:56:42.606837 | orchestrator | 2025-07-12 15:56:42.606846 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-07-12 15:56:42.606856 | orchestrator | Saturday 12 July 2025 15:55:39 +0000 (0:00:01.767) 0:00:55.002 ********* 2025-07-12 15:56:42.606865 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 15:56:42.606874 | orchestrator | 2025-07-12 15:56:42.606884 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-07-12 15:56:42.606894 | orchestrator | Saturday 12 July 2025 15:55:41 +0000 (0:00:01.379) 0:00:56.382 ********* 2025-07-12 15:56:42.606903 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:56:42.606914 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:56:42.606931 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:56:42.606947 | orchestrator | 2025-07-12 15:56:42.606964 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-07-12 15:56:42.606981 | orchestrator | 
Saturday 12 July 2025 15:55:41 +0000 (0:00:00.948) 0:00:57.331 ********* 2025-07-12 15:56:42.607008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.607036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.607047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.607057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607133 | orchestrator | 2025-07-12 15:56:42.607143 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-07-12 15:56:42.607153 | orchestrator | Saturday 12 July 2025 15:55:51 +0000 (0:00:09.514) 0:01:06.847 ********* 2025-07-12 15:56:42.607163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 15:56:42.607177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.607193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.607204 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:56:42.607221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 15:56:42.607233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.607244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.607255 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:56:42.607266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 15:56:42.607287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.607300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:56:42.607311 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:56:42.607322 | orchestrator | 2025-07-12 15:56:42.607333 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-07-12 15:56:42.607344 | orchestrator | Saturday 12 July 2025 15:55:52 +0000 (0:00:01.023) 0:01:07.871 ********* 2025-07-12 15:56:42.607361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.607374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.607389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 15:56:42.607405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:56:42.607485 | orchestrator | 2025-07-12 15:56:42.607496 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-12 15:56:42.607507 | orchestrator | Saturday 12 July 2025 
15:55:55 +0000 (0:00:03.476) 0:01:11.348 *********
2025-07-12 15:56:42.607518 | orchestrator | skipping: [testbed-node-0]
2025-07-12 15:56:42.607529 | orchestrator | skipping: [testbed-node-1]
2025-07-12 15:56:42.607543 | orchestrator | skipping: [testbed-node-2]
2025-07-12 15:56:42.607554 | orchestrator |
2025-07-12 15:56:42.607564 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-07-12 15:56:42.607573 | orchestrator | Saturday 12 July 2025 15:55:56 +0000 (0:00:00.385) 0:01:11.733 *********
2025-07-12 15:56:42.607582 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:56:42.607592 | orchestrator |
2025-07-12 15:56:42.607601 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-07-12 15:56:42.607611 | orchestrator | Saturday 12 July 2025 15:55:58 +0000 (0:00:02.321) 0:01:14.055 *********
2025-07-12 15:56:42.607620 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:56:42.607630 | orchestrator |
2025-07-12 15:56:42.607639 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-07-12 15:56:42.607649 | orchestrator | Saturday 12 July 2025 15:56:01 +0000 (0:00:02.505) 0:01:16.560 *********
2025-07-12 15:56:42.607658 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:56:42.607668 | orchestrator |
2025-07-12 15:56:42.607678 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-12 15:56:42.607687 | orchestrator | Saturday 12 July 2025 15:56:13 +0000 (0:00:12.699) 0:01:29.260 *********
2025-07-12 15:56:42.607697 | orchestrator |
2025-07-12 15:56:42.607706 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-12 15:56:42.607715 | orchestrator | Saturday 12 July 2025 15:56:14 +0000 (0:00:00.175) 0:01:29.435 *********
2025-07-12 15:56:42.607742 | orchestrator |
2025-07-12 15:56:42.607752 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-12 15:56:42.607761 | orchestrator | Saturday 12 July 2025 15:56:14 +0000 (0:00:00.123) 0:01:29.559 *********
2025-07-12 15:56:42.607771 | orchestrator |
2025-07-12 15:56:42.607780 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-07-12 15:56:42.607789 | orchestrator | Saturday 12 July 2025 15:56:14 +0000 (0:00:00.385) 0:01:29.945 *********
2025-07-12 15:56:42.607799 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:56:42.607808 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:56:42.607818 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:56:42.607827 | orchestrator |
2025-07-12 15:56:42.607837 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-07-12 15:56:42.607846 | orchestrator | Saturday 12 July 2025 15:56:22 +0000 (0:00:08.273) 0:01:38.218 *********
2025-07-12 15:56:42.607856 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:56:42.607865 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:56:42.607881 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:56:42.607891 | orchestrator |
2025-07-12 15:56:42.607900 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-07-12 15:56:42.607910 | orchestrator | Saturday 12 July 2025 15:56:34 +0000 (0:00:11.203) 0:01:49.422 *********
2025-07-12 15:56:42.607929 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:56:42.607938 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:56:42.607948 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:56:42.607957 | orchestrator |
2025-07-12 15:56:42.607966 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:56:42.607976 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 15:56:42.607987 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 15:56:42.607996 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 15:56:42.608006 | orchestrator |
2025-07-12 15:56:42.608015 | orchestrator |
2025-07-12 15:56:42.608024 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:56:42.608034 | orchestrator | Saturday 12 July 2025 15:56:40 +0000 (0:00:06.422) 0:01:55.844 *********
2025-07-12 15:56:42.608043 | orchestrator | ===============================================================================
2025-07-12 15:56:42.608053 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.34s
2025-07-12 15:56:42.608062 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.70s
2025-07-12 15:56:42.608071 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.20s
2025-07-12 15:56:42.608081 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.52s
2025-07-12 15:56:42.608090 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.27s
2025-07-12 15:56:42.608100 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.65s
2025-07-12 15:56:42.608109 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.42s
2025-07-12 15:56:42.608118 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.37s
2025-07-12 15:56:42.608128 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.24s
2025-07-12 15:56:42.608137 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.74s
2025-07-12 15:56:42.608146 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.48s
2025-07-12 15:56:42.608156 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.47s
2025-07-12 15:56:42.608165 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.37s
2025-07-12 15:56:42.608174 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.23s
2025-07-12 15:56:42.608183 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.51s
2025-07-12 15:56:42.608193 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.32s
2025-07-12 15:56:42.608206 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.77s
2025-07-12 15:56:42.608216 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.72s
2025-07-12 15:56:42.608226 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.41s
2025-07-12 15:56:42.608235 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.38s
2025-07-12 15:56:42.608244 | orchestrator | 2025-07-12 15:56:42 | INFO  | Task 23ffe693-a988-42fc-9de5-f83e58018651 is in state STARTED
2025-07-12 15:56:42.608254 | orchestrator | 2025-07-12 15:56:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:56:45.627196 | orchestrator | 2025-07-12 15:56:45 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED
2025-07-12 15:56:45.627889 | orchestrator | 2025-07-12 15:56:45 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:56:45.628015 | orchestrator | 2025-07-12 15:56:45 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED
2025-07-12 15:56:45.628548 | orchestrator | 2025-07-12 15:56:45 | INFO  | Task 23ffe693-a988-42fc-9de5-f83e58018651 is in state STARTED
2025-07-12
15:56:45.628586 | orchestrator | 2025-07-12 15:56:45 | INFO  | Wait 1 second(s) until the next check
check 2025-07-12 15:57:16.022235 | orchestrator | 2025-07-12 15:57:16 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:16.027265 | orchestrator | 2025-07-12 15:57:16 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:16.030074 | orchestrator | 2025-07-12 15:57:16 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:16.031920 | orchestrator | 2025-07-12 15:57:16 | INFO  | Task 23ffe693-a988-42fc-9de5-f83e58018651 is in state STARTED 2025-07-12 15:57:16.032406 | orchestrator | 2025-07-12 15:57:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:19.072899 | orchestrator | 2025-07-12 15:57:19 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:19.073160 | orchestrator | 2025-07-12 15:57:19 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:19.073936 | orchestrator | 2025-07-12 15:57:19 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:19.074899 | orchestrator | 2025-07-12 15:57:19 | INFO  | Task 23ffe693-a988-42fc-9de5-f83e58018651 is in state STARTED 2025-07-12 15:57:19.074944 | orchestrator | 2025-07-12 15:57:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:22.116388 | orchestrator | 2025-07-12 15:57:22 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:22.118243 | orchestrator | 2025-07-12 15:57:22 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:22.119366 | orchestrator | 2025-07-12 15:57:22 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:22.120215 | orchestrator | 2025-07-12 15:57:22 | INFO  | Task 23ffe693-a988-42fc-9de5-f83e58018651 is in state STARTED 2025-07-12 15:57:22.120232 | orchestrator | 2025-07-12 15:57:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 
15:57:25.164951 | orchestrator | 2025-07-12 15:57:25 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:25.165972 | orchestrator | 2025-07-12 15:57:25 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:25.169188 | orchestrator | 2025-07-12 15:57:25 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:25.172546 | orchestrator | 2025-07-12 15:57:25 | INFO  | Task 23ffe693-a988-42fc-9de5-f83e58018651 is in state STARTED 2025-07-12 15:57:25.172585 | orchestrator | 2025-07-12 15:57:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:28.211961 | orchestrator | 2025-07-12 15:57:28 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:28.212868 | orchestrator | 2025-07-12 15:57:28 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:28.213862 | orchestrator | 2025-07-12 15:57:28 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:28.216654 | orchestrator | 2025-07-12 15:57:28 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:28.218230 | orchestrator | 2025-07-12 15:57:28 | INFO  | Task 23ffe693-a988-42fc-9de5-f83e58018651 is in state SUCCESS 2025-07-12 15:57:28.218264 | orchestrator | 2025-07-12 15:57:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:31.256426 | orchestrator | 2025-07-12 15:57:31 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:31.257857 | orchestrator | 2025-07-12 15:57:31 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:31.257890 | orchestrator | 2025-07-12 15:57:31 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:31.258347 | orchestrator | 2025-07-12 15:57:31 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 
15:57:31.258525 | orchestrator | 2025-07-12 15:57:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:34.301228 | orchestrator | 2025-07-12 15:57:34 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:34.304414 | orchestrator | 2025-07-12 15:57:34 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:34.306282 | orchestrator | 2025-07-12 15:57:34 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:34.308150 | orchestrator | 2025-07-12 15:57:34 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:34.308179 | orchestrator | 2025-07-12 15:57:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:37.339359 | orchestrator | 2025-07-12 15:57:37 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:37.339702 | orchestrator | 2025-07-12 15:57:37 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:37.340405 | orchestrator | 2025-07-12 15:57:37 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:37.342707 | orchestrator | 2025-07-12 15:57:37 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:37.342735 | orchestrator | 2025-07-12 15:57:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:40.377998 | orchestrator | 2025-07-12 15:57:40 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:40.379268 | orchestrator | 2025-07-12 15:57:40 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:40.381170 | orchestrator | 2025-07-12 15:57:40 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:40.382821 | orchestrator | 2025-07-12 15:57:40 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:40.382868 | orchestrator 
| 2025-07-12 15:57:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:43.414265 | orchestrator | 2025-07-12 15:57:43 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:43.414352 | orchestrator | 2025-07-12 15:57:43 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:43.414367 | orchestrator | 2025-07-12 15:57:43 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:43.414666 | orchestrator | 2025-07-12 15:57:43 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:43.414689 | orchestrator | 2025-07-12 15:57:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:46.438883 | orchestrator | 2025-07-12 15:57:46 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:46.440628 | orchestrator | 2025-07-12 15:57:46 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:46.441040 | orchestrator | 2025-07-12 15:57:46 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:46.441921 | orchestrator | 2025-07-12 15:57:46 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:46.441944 | orchestrator | 2025-07-12 15:57:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:49.481819 | orchestrator | 2025-07-12 15:57:49 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:49.481908 | orchestrator | 2025-07-12 15:57:49 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:49.482316 | orchestrator | 2025-07-12 15:57:49 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:49.483072 | orchestrator | 2025-07-12 15:57:49 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:49.483096 | orchestrator | 2025-07-12 15:57:49 | INFO  | 
Wait 1 second(s) until the next check 2025-07-12 15:57:52.505904 | orchestrator | 2025-07-12 15:57:52 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:52.506009 | orchestrator | 2025-07-12 15:57:52 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:52.506458 | orchestrator | 2025-07-12 15:57:52 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:52.506971 | orchestrator | 2025-07-12 15:57:52 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:52.507104 | orchestrator | 2025-07-12 15:57:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:55.532559 | orchestrator | 2025-07-12 15:57:55 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:55.533133 | orchestrator | 2025-07-12 15:57:55 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:55.534661 | orchestrator | 2025-07-12 15:57:55 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:55.534837 | orchestrator | 2025-07-12 15:57:55 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:55.534856 | orchestrator | 2025-07-12 15:57:55 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:57:58.563747 | orchestrator | 2025-07-12 15:57:58 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:57:58.565215 | orchestrator | 2025-07-12 15:57:58 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:57:58.566170 | orchestrator | 2025-07-12 15:57:58 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:57:58.566679 | orchestrator | 2025-07-12 15:57:58 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:57:58.566850 | orchestrator | 2025-07-12 15:57:58 | INFO  | Wait 1 second(s) until the next 
check 2025-07-12 15:58:01.604144 | orchestrator | 2025-07-12 15:58:01 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:01.605609 | orchestrator | 2025-07-12 15:58:01 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:01.607342 | orchestrator | 2025-07-12 15:58:01 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:01.609107 | orchestrator | 2025-07-12 15:58:01 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:01.609452 | orchestrator | 2025-07-12 15:58:01 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:04.649186 | orchestrator | 2025-07-12 15:58:04 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:04.651064 | orchestrator | 2025-07-12 15:58:04 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:04.653791 | orchestrator | 2025-07-12 15:58:04 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:04.656395 | orchestrator | 2025-07-12 15:58:04 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:04.656815 | orchestrator | 2025-07-12 15:58:04 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:07.694969 | orchestrator | 2025-07-12 15:58:07 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:07.695756 | orchestrator | 2025-07-12 15:58:07 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:07.696381 | orchestrator | 2025-07-12 15:58:07 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:07.697137 | orchestrator | 2025-07-12 15:58:07 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:07.697167 | orchestrator | 2025-07-12 15:58:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 
15:58:10.732426 | orchestrator | 2025-07-12 15:58:10 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:10.735760 | orchestrator | 2025-07-12 15:58:10 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:10.736107 | orchestrator | 2025-07-12 15:58:10 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:10.736624 | orchestrator | 2025-07-12 15:58:10 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:10.736725 | orchestrator | 2025-07-12 15:58:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:13.767935 | orchestrator | 2025-07-12 15:58:13 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:13.771631 | orchestrator | 2025-07-12 15:58:13 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:13.772138 | orchestrator | 2025-07-12 15:58:13 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:13.772941 | orchestrator | 2025-07-12 15:58:13 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:13.773030 | orchestrator | 2025-07-12 15:58:13 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:16.831498 | orchestrator | 2025-07-12 15:58:16 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:16.834278 | orchestrator | 2025-07-12 15:58:16 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:16.835625 | orchestrator | 2025-07-12 15:58:16 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:16.837033 | orchestrator | 2025-07-12 15:58:16 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:16.837276 | orchestrator | 2025-07-12 15:58:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:19.880317 | orchestrator 
| 2025-07-12 15:58:19 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:19.880624 | orchestrator | 2025-07-12 15:58:19 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:19.881610 | orchestrator | 2025-07-12 15:58:19 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:19.882310 | orchestrator | 2025-07-12 15:58:19 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:19.882398 | orchestrator | 2025-07-12 15:58:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:22.915035 | orchestrator | 2025-07-12 15:58:22 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:22.915260 | orchestrator | 2025-07-12 15:58:22 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:22.916153 | orchestrator | 2025-07-12 15:58:22 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:22.916782 | orchestrator | 2025-07-12 15:58:22 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:22.916851 | orchestrator | 2025-07-12 15:58:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:25.946864 | orchestrator | 2025-07-12 15:58:25 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:25.947563 | orchestrator | 2025-07-12 15:58:25 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:25.948968 | orchestrator | 2025-07-12 15:58:25 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:25.950450 | orchestrator | 2025-07-12 15:58:25 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:25.950484 | orchestrator | 2025-07-12 15:58:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:28.993835 | orchestrator | 2025-07-12 15:58:28 | INFO  | 
Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:28.994996 | orchestrator | 2025-07-12 15:58:28 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:28.997188 | orchestrator | 2025-07-12 15:58:28 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:28.998919 | orchestrator | 2025-07-12 15:58:28 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:28.999065 | orchestrator | 2025-07-12 15:58:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:32.037753 | orchestrator | 2025-07-12 15:58:32 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state STARTED 2025-07-12 15:58:32.040403 | orchestrator | 2025-07-12 15:58:32 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:32.041984 | orchestrator | 2025-07-12 15:58:32 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:32.042259 | orchestrator | 2025-07-12 15:58:32 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:32.043748 | orchestrator | 2025-07-12 15:58:32 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:35.084856 | orchestrator | 2025-07-12 15:58:35 | INFO  | Task b440ea61-34c8-43c5-9abf-32d523417a85 is in state SUCCESS 2025-07-12 15:58:35.086539 | orchestrator | 2025-07-12 15:58:35.086618 | orchestrator | 2025-07-12 15:58:35.086632 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-07-12 15:58:35.086645 | orchestrator | 2025-07-12 15:58:35.086656 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-07-12 15:58:35.086667 | orchestrator | Saturday 12 July 2025 15:56:46 +0000 (0:00:00.093) 0:00:00.094 ********* 2025-07-12 15:58:35.086701 | orchestrator | changed: [localhost] 2025-07-12 15:58:35.086713 | orchestrator | 2025-07-12 
15:58:35.086725 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-07-12 15:58:35.086735 | orchestrator | Saturday 12 July 2025 15:56:48 +0000 (0:00:01.594) 0:00:01.688 ********* 2025-07-12 15:58:35.086746 | orchestrator | changed: [localhost] 2025-07-12 15:58:35.086758 | orchestrator | 2025-07-12 15:58:35.086769 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-07-12 15:58:35.086780 | orchestrator | Saturday 12 July 2025 15:57:20 +0000 (0:00:32.047) 0:00:33.736 ********* 2025-07-12 15:58:35.086791 | orchestrator | changed: [localhost] 2025-07-12 15:58:35.086801 | orchestrator | 2025-07-12 15:58:35.086812 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:58:35.086823 | orchestrator | 2025-07-12 15:58:35.086833 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:58:35.086845 | orchestrator | Saturday 12 July 2025 15:57:24 +0000 (0:00:04.231) 0:00:37.967 ********* 2025-07-12 15:58:35.086855 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:58:35.086866 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:58:35.086879 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:58:35.086891 | orchestrator | 2025-07-12 15:58:35.086903 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:58:35.086915 | orchestrator | Saturday 12 July 2025 15:57:25 +0000 (0:00:00.336) 0:00:38.303 ********* 2025-07-12 15:58:35.086928 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-07-12 15:58:35.086940 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-07-12 15:58:35.086953 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-07-12 15:58:35.086965 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 
2025-07-12 15:58:35.086977 | orchestrator | 2025-07-12 15:58:35.086990 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-07-12 15:58:35.087003 | orchestrator | skipping: no hosts matched 2025-07-12 15:58:35.087016 | orchestrator | 2025-07-12 15:58:35.087034 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:58:35.087054 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:58:35.087200 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:58:35.087226 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:58:35.087245 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 15:58:35.087333 | orchestrator | 2025-07-12 15:58:35.087443 | orchestrator | 2025-07-12 15:58:35.087465 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:58:35.087503 | orchestrator | Saturday 12 July 2025 15:57:25 +0000 (0:00:00.460) 0:00:38.764 ********* 2025-07-12 15:58:35.087516 | orchestrator | =============================================================================== 2025-07-12 15:58:35.087527 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 32.05s 2025-07-12 15:58:35.087538 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.23s 2025-07-12 15:58:35.087548 | orchestrator | Ensure the destination directory exists --------------------------------- 1.59s 2025-07-12 15:58:35.087559 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-07-12 15:58:35.087570 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 
2025-07-12 15:58:35.087581 | orchestrator | 2025-07-12 15:58:35.087591 | orchestrator | 2025-07-12 15:58:35.087602 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:58:35.087613 | orchestrator | 2025-07-12 15:58:35.087624 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:58:35.087634 | orchestrator | Saturday 12 July 2025 15:55:32 +0000 (0:00:00.410) 0:00:00.410 ********* 2025-07-12 15:58:35.087810 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:58:35.087844 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:58:35.087865 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:58:35.087884 | orchestrator | 2025-07-12 15:58:35.087904 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:58:35.087915 | orchestrator | Saturday 12 July 2025 15:55:32 +0000 (0:00:00.512) 0:00:00.922 ********* 2025-07-12 15:58:35.087926 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-07-12 15:58:35.087937 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-07-12 15:58:35.087949 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-07-12 15:58:35.087959 | orchestrator | 2025-07-12 15:58:35.087970 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-07-12 15:58:35.087986 | orchestrator | 2025-07-12 15:58:35.088005 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 15:58:35.088023 | orchestrator | Saturday 12 July 2025 15:55:33 +0000 (0:00:00.418) 0:00:01.341 ********* 2025-07-12 15:58:35.088041 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:58:35.088059 | orchestrator | 2025-07-12 15:58:35.088076 | orchestrator | TASK [service-ks-register : designate 
| Creating services] ********************* 2025-07-12 15:58:35.088095 | orchestrator | Saturday 12 July 2025 15:55:34 +0000 (0:00:00.624) 0:00:01.965 ********* 2025-07-12 15:58:35.088137 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-07-12 15:58:35.088158 | orchestrator | 2025-07-12 15:58:35.088176 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-07-12 15:58:35.088194 | orchestrator | Saturday 12 July 2025 15:55:37 +0000 (0:00:03.442) 0:00:05.408 ********* 2025-07-12 15:58:35.088213 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-07-12 15:58:35.088233 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-07-12 15:58:35.088251 | orchestrator | 2025-07-12 15:58:35.088281 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-07-12 15:58:35.088321 | orchestrator | Saturday 12 July 2025 15:55:44 +0000 (0:00:07.003) 0:00:12.411 ********* 2025-07-12 15:58:35.088333 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 15:58:35.088343 | orchestrator | 2025-07-12 15:58:35.088354 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-07-12 15:58:35.088390 | orchestrator | Saturday 12 July 2025 15:55:47 +0000 (0:00:03.481) 0:00:15.893 ********* 2025-07-12 15:58:35.088402 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 15:58:35.088413 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-07-12 15:58:35.088423 | orchestrator | 2025-07-12 15:58:35.088434 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-07-12 15:58:35.088445 | orchestrator | Saturday 12 July 2025 15:55:52 +0000 (0:00:04.092) 0:00:19.985 ********* 2025-07-12 15:58:35.088455 | 
orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 15:58:35.088466 | orchestrator | 2025-07-12 15:58:35.088477 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-07-12 15:58:35.088487 | orchestrator | Saturday 12 July 2025 15:55:55 +0000 (0:00:03.517) 0:00:23.503 ********* 2025-07-12 15:58:35.088498 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-07-12 15:58:35.088509 | orchestrator | 2025-07-12 15:58:35.088520 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-07-12 15:58:35.088530 | orchestrator | Saturday 12 July 2025 15:56:00 +0000 (0:00:04.577) 0:00:28.081 ********* 2025-07-12 15:58:35.088545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.088572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.088596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.088629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.088904 | orchestrator | 2025-07-12 15:58:35.088915 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-07-12 15:58:35.088926 | orchestrator | Saturday 12 July 2025 15:56:03 +0000 (0:00:03.644) 0:00:31.725 ********* 2025-07-12 15:58:35.088937 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:35.088948 | orchestrator | 2025-07-12 15:58:35.088959 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-07-12 15:58:35.088970 | orchestrator | Saturday 12 July 2025 15:56:03 +0000 (0:00:00.095) 0:00:31.821 ********* 
2025-07-12 15:58:35.088981 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:35.088992 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:35.089003 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:35.089014 | orchestrator | 2025-07-12 15:58:35.089024 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 15:58:35.089035 | orchestrator | Saturday 12 July 2025 15:56:04 +0000 (0:00:00.225) 0:00:32.047 ********* 2025-07-12 15:58:35.089046 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:58:35.089057 | orchestrator | 2025-07-12 15:58:35.089068 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-07-12 15:58:35.089079 | orchestrator | Saturday 12 July 2025 15:56:05 +0000 (0:00:01.457) 0:00:33.504 ********* 2025-07-12 15:58:35.089091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.089114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.089134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.089146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.089398 | orchestrator | 2025-07-12 15:58:35.089411 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-07-12 15:58:35.089422 | orchestrator | Saturday 12 July 2025 15:56:12 +0000 (0:00:07.278) 0:00:40.782 ********* 2025-07-12 15:58:35.089434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.089457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:58:35.090773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.090833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.090850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.090870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2025-07-12 15:58:35.090888 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:35.090908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.090961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.091001 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:58:35.091023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:58:35.091076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 
15:58:35.091096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091172 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091281 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:35.091321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091403 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091428 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:35.091464 | orchestrator | 2025-07-12 15:58:35.091486 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-07-12 15:58:35.091507 | orchestrator | Saturday 12 July 2025 15:56:15 +0000 (0:00:02.242) 0:00:43.025 ********* 2025-07-12 15:58:35.091524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.091545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:58:35.091560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.091660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:58:35.091685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091759 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:35.091778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091868 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:35.091892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.091914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:58:35.091926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.091988 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:35.091999 | orchestrator | 2025-07-12 15:58:35.092011 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-07-12 15:58:35.092022 | orchestrator 
| Saturday 12 July 2025 15:56:17 +0000 (0:00:01.955) 0:00:44.981 ********* 2025-07-12 15:58:35.092032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.092052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.092069 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.092080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}}) 2025-07-12 15:58:35.092265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092275 | orchestrator | 2025-07-12 15:58:35.092285 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-07-12 15:58:35.092294 | orchestrator | Saturday 12 July 2025 15:56:23 +0000 (0:00:06.908) 0:00:51.889 ********* 2025-07-12 15:58:35.092304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.092319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.092335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.092346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092572 | orchestrator | 2025-07-12 15:58:35.092581 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-07-12 15:58:35.092591 | orchestrator | Saturday 12 July 2025 15:56:42 +0000 (0:00:18.434) 0:01:10.324 ********* 2025-07-12 15:58:35.092601 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 15:58:35.092610 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 15:58:35.092619 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 15:58:35.092629 | orchestrator | 2025-07-12 15:58:35.092638 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-07-12 15:58:35.092648 | orchestrator | Saturday 12 July 2025 15:56:47 +0000 (0:00:05.464) 0:01:15.788 ********* 2025-07-12 15:58:35.092657 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 15:58:35.092666 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 15:58:35.092676 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 15:58:35.092685 | orchestrator | 2025-07-12 15:58:35.092694 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-07-12 15:58:35.092703 | orchestrator | Saturday 12 July 2025 15:56:51 +0000 (0:00:03.388) 0:01:19.176 ********* 2025-07-12 15:58:35.092718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.092734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.092752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2025-07-12 15:58:35.092762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.092797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.092821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.092832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 
15:58:35.092841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.092851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2025-07-12 15:58:35.092876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.092894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.092911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.092921 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.092951 | orchestrator | 2025-07-12 15:58:35.092961 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2025-07-12 15:58:35.092971 | orchestrator | Saturday 12 July 2025 15:56:54 +0000 (0:00:03.506) 0:01:22.683 ********* 2025-07-12 15:58:35.092980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.093005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 
15:58:35.093022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.093032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093419 | orchestrator | 2025-07-12 15:58:35.093433 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 15:58:35.093443 | orchestrator | Saturday 12 July 2025 15:56:58 +0000 (0:00:03.363) 0:01:26.046 ********* 2025-07-12 15:58:35.093452 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:35.093463 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:35.093472 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:35.093489 | orchestrator | 2025-07-12 15:58:35.093498 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-07-12 15:58:35.093508 | orchestrator | Saturday 12 July 2025 15:56:58 +0000 (0:00:00.444) 0:01:26.490 ********* 2025-07-12 15:58:35.093522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.093533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:58:35.093550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093597 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:35.093612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.093622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:58:35.093638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093685 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:35.093695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 15:58:35.093710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 15:58:35.093726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 15:58:35.093778 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:35.093787 | orchestrator | 2025-07-12 15:58:35.093797 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-07-12 15:58:35.093807 | orchestrator | Saturday 12 July 2025 15:56:59 +0000 (0:00:00.842) 0:01:27.333 ********* 2025-07-12 15:58:35.093821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.093832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.093848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 15:58:35.093859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.093988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.094004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.094014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.094058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.094074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 15:58:35.094085 | orchestrator | 2025-07-12 15:58:35.094095 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 15:58:35.094105 | orchestrator | Saturday 12 July 2025 15:57:04 +0000 (0:00:04.865) 0:01:32.199 ********* 2025-07-12 15:58:35.094114 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:35.094124 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:35.094133 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:35.094142 | orchestrator | 2025-07-12 15:58:35.094152 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2025-07-12 15:58:35.094161 | orchestrator | Saturday 12 July 2025 15:57:04 +0000 (0:00:00.414) 0:01:32.613 ********* 2025-07-12 15:58:35.094171 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-07-12 15:58:35.094181 | orchestrator | 2025-07-12 15:58:35.094190 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-07-12 15:58:35.094199 | orchestrator | Saturday 12 July 2025 15:57:07 +0000 (0:00:03.208) 0:01:35.821 ********* 2025-07-12 15:58:35.094209 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 15:58:35.094261 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-07-12 15:58:35.094273 | orchestrator | 2025-07-12 15:58:35.094282 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-07-12 15:58:35.094292 | orchestrator | Saturday 12 July 2025 15:57:10 +0000 (0:00:02.465) 0:01:38.287 ********* 2025-07-12 15:58:35.094301 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:35.094310 | orchestrator | 2025-07-12 15:58:35.094320 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-12 15:58:35.094329 | orchestrator | Saturday 12 July 2025 15:57:26 +0000 (0:00:16.488) 0:01:54.776 ********* 2025-07-12 15:58:35.094339 | orchestrator | 2025-07-12 15:58:35.094354 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-12 15:58:35.094425 | orchestrator | Saturday 12 July 2025 15:57:26 +0000 (0:00:00.064) 0:01:54.840 ********* 2025-07-12 15:58:35.094443 | orchestrator | 2025-07-12 15:58:35.094457 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-12 15:58:35.094467 | orchestrator | Saturday 12 July 2025 15:57:26 +0000 (0:00:00.062) 0:01:54.903 ********* 2025-07-12 15:58:35.094476 | orchestrator | 2025-07-12 
15:58:35.094486 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-07-12 15:58:35.094495 | orchestrator | Saturday 12 July 2025 15:57:27 +0000 (0:00:00.072) 0:01:54.975 ********* 2025-07-12 15:58:35.094505 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:35.094514 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:58:35.094524 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:58:35.094533 | orchestrator | 2025-07-12 15:58:35.094543 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-07-12 15:58:35.094552 | orchestrator | Saturday 12 July 2025 15:57:35 +0000 (0:00:08.615) 0:02:03.591 ********* 2025-07-12 15:58:35.094562 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:58:35.094580 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:58:35.094589 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:35.094599 | orchestrator | 2025-07-12 15:58:35.094609 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-07-12 15:58:35.094626 | orchestrator | Saturday 12 July 2025 15:57:45 +0000 (0:00:09.392) 0:02:12.983 ********* 2025-07-12 15:58:35.094636 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:58:35.094646 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:58:35.094656 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:35.094665 | orchestrator | 2025-07-12 15:58:35.094675 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-07-12 15:58:35.094684 | orchestrator | Saturday 12 July 2025 15:57:54 +0000 (0:00:09.749) 0:02:22.733 ********* 2025-07-12 15:58:35.094693 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:58:35.094703 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:58:35.094712 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:35.094722 | orchestrator | 2025-07-12 15:58:35.094731 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-07-12 15:58:35.094741 | orchestrator | Saturday 12 July 2025 15:58:06 +0000 (0:00:11.505) 0:02:34.239 ********* 2025-07-12 15:58:35.094750 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:35.094760 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:58:35.094769 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:58:35.094778 | orchestrator | 2025-07-12 15:58:35.094788 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-07-12 15:58:35.094797 | orchestrator | Saturday 12 July 2025 15:58:18 +0000 (0:00:12.343) 0:02:46.582 ********* 2025-07-12 15:58:35.094807 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:35.094816 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:58:35.094826 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:58:35.094835 | orchestrator | 2025-07-12 15:58:35.094844 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-07-12 15:58:35.094854 | orchestrator | Saturday 12 July 2025 15:58:26 +0000 (0:00:07.424) 0:02:54.006 ********* 2025-07-12 15:58:35.094863 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:35.094870 | orchestrator | 2025-07-12 15:58:35.094878 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:58:35.094886 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 15:58:35.094895 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 15:58:35.094903 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 15:58:35.094911 | orchestrator | 2025-07-12 15:58:35.094919 | orchestrator | 2025-07-12 15:58:35.094926 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-12 15:58:35.094934 | orchestrator | Saturday 12 July 2025 15:58:34 +0000 (0:00:08.247) 0:03:02.254 ********* 2025-07-12 15:58:35.094942 | orchestrator | =============================================================================== 2025-07-12 15:58:35.094950 | orchestrator | designate : Copying over designate.conf -------------------------------- 18.44s 2025-07-12 15:58:35.094957 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.49s 2025-07-12 15:58:35.094965 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.34s 2025-07-12 15:58:35.094973 | orchestrator | designate : Restart designate-producer container ----------------------- 11.51s 2025-07-12 15:58:35.094980 | orchestrator | designate : Restart designate-central container ------------------------- 9.75s 2025-07-12 15:58:35.094988 | orchestrator | designate : Restart designate-api container ----------------------------- 9.39s 2025-07-12 15:58:35.094996 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.62s 2025-07-12 15:58:35.095009 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.25s 2025-07-12 15:58:35.095017 | orchestrator | designate : Restart designate-worker container -------------------------- 7.42s 2025-07-12 15:58:35.095024 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.28s 2025-07-12 15:58:35.095032 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.00s 2025-07-12 15:58:35.095040 | orchestrator | designate : Copying over config.json files for services ----------------- 6.91s 2025-07-12 15:58:35.095047 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.46s 2025-07-12 15:58:35.095055 | orchestrator | designate : Check designate 
containers ---------------------------------- 4.87s 2025-07-12 15:58:35.095067 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.58s 2025-07-12 15:58:35.095075 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.09s 2025-07-12 15:58:35.095083 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.64s 2025-07-12 15:58:35.095091 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.52s 2025-07-12 15:58:35.095099 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.51s 2025-07-12 15:58:35.095106 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.48s 2025-07-12 15:58:35.095114 | orchestrator | 2025-07-12 15:58:35 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state STARTED 2025-07-12 15:58:35.095122 | orchestrator | 2025-07-12 15:58:35 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:35.095130 | orchestrator | 2025-07-12 15:58:35 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED 2025-07-12 15:58:35.095138 | orchestrator | 2025-07-12 15:58:35 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:38.123300 | orchestrator | 2025-07-12 15:58:38 | INFO  | Task b1aa95e3-e362-4acf-99a2-d60121bc0b6b is in state SUCCESS 2025-07-12 15:58:38.124647 | orchestrator | 2025-07-12 15:58:38.124682 | orchestrator | 2025-07-12 15:58:38.124694 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 15:58:38.124705 | orchestrator | 2025-07-12 15:58:38.124735 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:58:38.124747 | orchestrator | Saturday 12 July 2025 15:57:31 +0000 (0:00:00.302) 0:00:00.302 ********* 2025-07-12 15:58:38.124758 | orchestrator | ok: 
[testbed-node-0] 2025-07-12 15:58:38.124770 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:58:38.124780 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:58:38.124791 | orchestrator | 2025-07-12 15:58:38.124802 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:58:38.124813 | orchestrator | Saturday 12 July 2025 15:57:31 +0000 (0:00:00.498) 0:00:00.800 ********* 2025-07-12 15:58:38.124824 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-07-12 15:58:38.124835 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-07-12 15:58:38.124846 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-07-12 15:58:38.124856 | orchestrator | 2025-07-12 15:58:38.124867 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-07-12 15:58:38.124878 | orchestrator | 2025-07-12 15:58:38.124888 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-12 15:58:38.124899 | orchestrator | Saturday 12 July 2025 15:57:31 +0000 (0:00:00.333) 0:00:01.133 ********* 2025-07-12 15:58:38.124910 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:58:38.124932 | orchestrator | 2025-07-12 15:58:38.124943 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-07-12 15:58:38.124954 | orchestrator | Saturday 12 July 2025 15:57:32 +0000 (0:00:00.388) 0:00:01.522 ********* 2025-07-12 15:58:38.124986 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-07-12 15:58:38.124997 | orchestrator | 2025-07-12 15:58:38.125008 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-07-12 15:58:38.125019 | orchestrator | Saturday 12 July 2025 15:57:35 +0000 (0:00:03.566) 0:00:05.088 ********* 
2025-07-12 15:58:38.125029 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-07-12 15:58:38.125040 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-07-12 15:58:38.125051 | orchestrator | 2025-07-12 15:58:38.125062 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-07-12 15:58:38.125072 | orchestrator | Saturday 12 July 2025 15:57:41 +0000 (0:00:06.085) 0:00:11.174 ********* 2025-07-12 15:58:38.125084 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 15:58:38.125094 | orchestrator | 2025-07-12 15:58:38.125105 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-07-12 15:58:38.125116 | orchestrator | Saturday 12 July 2025 15:57:45 +0000 (0:00:03.653) 0:00:14.827 ********* 2025-07-12 15:58:38.125126 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 15:58:38.125137 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-07-12 15:58:38.125147 | orchestrator | 2025-07-12 15:58:38.125158 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-07-12 15:58:38.125168 | orchestrator | Saturday 12 July 2025 15:57:49 +0000 (0:00:04.022) 0:00:18.850 ********* 2025-07-12 15:58:38.125179 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 15:58:38.125189 | orchestrator | 2025-07-12 15:58:38.125200 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-07-12 15:58:38.125210 | orchestrator | Saturday 12 July 2025 15:57:52 +0000 (0:00:03.380) 0:00:22.231 ********* 2025-07-12 15:58:38.125221 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-07-12 15:58:38.125232 | orchestrator | 2025-07-12 15:58:38.125243 | orchestrator | TASK 
[placement : include_tasks] *********************************************** 2025-07-12 15:58:38.125256 | orchestrator | Saturday 12 July 2025 15:57:58 +0000 (0:00:05.120) 0:00:27.352 ********* 2025-07-12 15:58:38.125267 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:38.125279 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:38.125291 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:38.125303 | orchestrator | 2025-07-12 15:58:38.125315 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-07-12 15:58:38.125339 | orchestrator | Saturday 12 July 2025 15:57:58 +0000 (0:00:00.393) 0:00:27.745 ********* 2025-07-12 15:58:38.125377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.125409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.125431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.125445 | orchestrator | 2025-07-12 15:58:38.125457 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-07-12 15:58:38.125470 | orchestrator | Saturday 12 July 2025 15:57:59 +0000 (0:00:00.994) 0:00:28.740 ********* 2025-07-12 15:58:38.125482 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:38.125495 | orchestrator | 2025-07-12 15:58:38.125507 | orchestrator 
| TASK [placement : Set placement policy file] *********************************** 2025-07-12 15:58:38.125520 | orchestrator | Saturday 12 July 2025 15:57:59 +0000 (0:00:00.111) 0:00:28.852 ********* 2025-07-12 15:58:38.125532 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:38.125545 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:38.125557 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:38.125569 | orchestrator | 2025-07-12 15:58:38.125581 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-12 15:58:38.125594 | orchestrator | Saturday 12 July 2025 15:57:59 +0000 (0:00:00.355) 0:00:29.207 ********* 2025-07-12 15:58:38.125605 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 15:58:38.125616 | orchestrator | 2025-07-12 15:58:38.125627 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-07-12 15:58:38.125638 | orchestrator | Saturday 12 July 2025 15:58:00 +0000 (0:00:00.642) 0:00:29.850 ********* 2025-07-12 15:58:38.125670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.125691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.125713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 
15:58:38.125725 | orchestrator | 2025-07-12 15:58:38.125735 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-07-12 15:58:38.125746 | orchestrator | Saturday 12 July 2025 15:58:01 +0000 (0:00:01.314) 0:00:31.165 ********* 2025-07-12 15:58:38.125757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:58:38.125773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:58:38.125785 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:38.125796 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:38.125815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:58:38.125833 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:38.125845 | orchestrator | 2025-07-12 15:58:38.125855 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-07-12 15:58:38.125866 | orchestrator | Saturday 12 July 2025 15:58:02 +0000 (0:00:00.645) 0:00:31.810 ********* 2025-07-12 15:58:38.125878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:58:38.125889 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:38.125900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:58:38.125911 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:38.125927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:58:38.125945 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:38.125956 | orchestrator | 2025-07-12 15:58:38.125967 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-07-12 15:58:38.125978 | orchestrator | Saturday 12 July 2025 15:58:03 +0000 (0:00:00.771) 0:00:32.582 ********* 2025-07-12 15:58:38.125995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 
15:58:38.126008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.126074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.126087 | orchestrator | 2025-07-12 15:58:38.126098 | orchestrator | TASK [placement : 
Copying over placement.conf] ********************************* 2025-07-12 15:58:38.126109 | orchestrator | Saturday 12 July 2025 15:58:04 +0000 (0:00:01.536) 0:00:34.119 ********* 2025-07-12 15:58:38.126161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.126185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.126205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.126217 | orchestrator | 2025-07-12 15:58:38.126228 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-07-12 15:58:38.126240 | orchestrator | Saturday 12 July 2025 15:58:08 +0000 (0:00:03.519) 0:00:37.638 ********* 2025-07-12 15:58:38.126250 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 15:58:38.126261 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 15:58:38.126273 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 15:58:38.126284 | orchestrator | 2025-07-12 15:58:38.126294 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-07-12 15:58:38.126305 | orchestrator | Saturday 12 July 2025 15:58:10 +0000 
(0:00:02.224) 0:00:39.863 ********* 2025-07-12 15:58:38.126316 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:58:38.126327 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:38.126338 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:58:38.126349 | orchestrator | 2025-07-12 15:58:38.126388 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-07-12 15:58:38.126408 | orchestrator | Saturday 12 July 2025 15:58:11 +0000 (0:00:01.311) 0:00:41.174 ********* 2025-07-12 15:58:38.126428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:58:38.126455 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:38.126472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:58:38.126483 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:38.126502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 15:58:38.126514 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:38.126525 | orchestrator | 2025-07-12 15:58:38.126536 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-07-12 15:58:38.126547 | orchestrator | Saturday 12 July 2025 15:58:12 +0000 (0:00:00.432) 0:00:41.607 ********* 2025-07-12 15:58:38.126558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.126569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.126597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 15:58:38.126609 | orchestrator | 2025-07-12 15:58:38.126620 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-07-12 15:58:38.126630 | orchestrator | Saturday 12 July 2025 15:58:13 +0000 (0:00:01.264) 0:00:42.871 ********* 2025-07-12 15:58:38.126641 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:38.126652 | orchestrator | 2025-07-12 15:58:38.126662 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-07-12 15:58:38.126673 | orchestrator | Saturday 12 July 2025 15:58:15 +0000 (0:00:02.327) 0:00:45.199 ********* 2025-07-12 15:58:38.126684 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:38.126694 | orchestrator | 2025-07-12 15:58:38.126705 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-07-12 15:58:38.126715 | orchestrator | Saturday 12 July 2025 15:58:18 +0000 (0:00:02.363) 0:00:47.562 ********* 2025-07-12 15:58:38.126732 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:38.126743 | orchestrator | 2025-07-12 15:58:38.126754 | orchestrator | TASK [placement : Flush handlers] 
**********************************************
2025-07-12 15:58:38.126765 | orchestrator | Saturday 12 July 2025 15:58:32 +0000 (0:00:14.168) 0:01:01.731 *********
2025-07-12 15:58:38.126776 | orchestrator |
2025-07-12 15:58:38.126786 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-07-12 15:58:38.126797 | orchestrator | Saturday 12 July 2025 15:58:32 +0000 (0:00:00.062) 0:01:01.793 *********
2025-07-12 15:58:38.126808 | orchestrator |
2025-07-12 15:58:38.126818 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-07-12 15:58:38.126829 | orchestrator | Saturday 12 July 2025 15:58:32 +0000 (0:00:00.059) 0:01:01.853 *********
2025-07-12 15:58:38.126839 | orchestrator |
2025-07-12 15:58:38.126850 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-07-12 15:58:38.126860 | orchestrator | Saturday 12 July 2025 15:58:32 +0000 (0:00:00.063) 0:01:01.917 *********
2025-07-12 15:58:38.126871 | orchestrator | changed: [testbed-node-0]
2025-07-12 15:58:38.126882 | orchestrator | changed: [testbed-node-1]
2025-07-12 15:58:38.126893 | orchestrator | changed: [testbed-node-2]
2025-07-12 15:58:38.126903 | orchestrator |
2025-07-12 15:58:38.126914 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 15:58:38.126925 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 15:58:38.126937 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 15:58:38.126954 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 15:58:38.126966 | orchestrator |
2025-07-12 15:58:38.126976 | orchestrator |
2025-07-12 15:58:38.126987 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 15:58:38.126998 | orchestrator | Saturday 12 July 2025 15:58:37 +0000 (0:00:05.207) 0:01:07.124 *********
2025-07-12 15:58:38.127008 | orchestrator | ===============================================================================
2025-07-12 15:58:38.127019 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.17s
2025-07-12 15:58:38.127029 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.09s
2025-07-12 15:58:38.127040 | orchestrator | placement : Restart placement-api container ----------------------------- 5.21s
2025-07-12 15:58:38.127051 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 5.12s
2025-07-12 15:58:38.127061 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.02s
2025-07-12 15:58:38.127072 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.65s
2025-07-12 15:58:38.127082 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.57s
2025-07-12 15:58:38.127093 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.52s
2025-07-12 15:58:38.127103 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.38s
2025-07-12 15:58:38.127114 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.36s
2025-07-12 15:58:38.127124 | orchestrator | placement : Creating placement databases -------------------------------- 2.33s
2025-07-12 15:58:38.127135 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.22s
2025-07-12 15:58:38.127146 | orchestrator | placement : Copying over config.json files for services ----------------- 1.54s
2025-07-12 15:58:38.127156 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.31s
2025-07-12 15:58:38.127167 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.31s
2025-07-12 15:58:38.127177 | orchestrator | placement : Check placement containers ---------------------------------- 1.26s
2025-07-12 15:58:38.127188 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.99s
2025-07-12 15:58:38.127198 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.77s
2025-07-12 15:58:38.127213 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.65s
2025-07-12 15:58:38.127224 | orchestrator | placement : include_tasks ----------------------------------------------- 0.64s
2025-07-12 15:58:38.127235 | orchestrator | 2025-07-12 15:58:38 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:58:38.127246 | orchestrator | 2025-07-12 15:58:38 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state STARTED
2025-07-12 15:58:38.127256 | orchestrator | 2025-07-12 15:58:38 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED
2025-07-12 15:58:38.127267 | orchestrator | 2025-07-12 15:58:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 15:58:41.159222 | orchestrator | 2025-07-12 15:58:41 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED
2025-07-12 15:58:41.160180 | orchestrator | 2025-07-12 15:58:41 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 15:58:41.162930 | orchestrator | 2025-07-12 15:58:41 | INFO  | Task 4989e6ca-e63f-42c8-ab70-fa7f662df2dd is in state SUCCESS
2025-07-12 15:58:41.164716 | orchestrator |
2025-07-12 15:58:41.164749 | orchestrator |
2025-07-12 15:58:41.164761 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 15:58:41.164773 | orchestrator |
2025-07-12
15:58:41.164784 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 15:58:41.164816 | orchestrator | Saturday 12 July 2025 15:54:36 +0000 (0:00:00.252) 0:00:00.252 ********* 2025-07-12 15:58:41.164828 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:58:41.164839 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:58:41.164850 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:58:41.164861 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:58:41.164871 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:58:41.164881 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:58:41.164892 | orchestrator | 2025-07-12 15:58:41.164903 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 15:58:41.164914 | orchestrator | Saturday 12 July 2025 15:54:37 +0000 (0:00:00.649) 0:00:00.902 ********* 2025-07-12 15:58:41.164924 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-07-12 15:58:41.164935 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-07-12 15:58:41.164946 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-07-12 15:58:41.164956 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-07-12 15:58:41.164967 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-07-12 15:58:41.164977 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-07-12 15:58:41.164988 | orchestrator | 2025-07-12 15:58:41.164999 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-07-12 15:58:41.165009 | orchestrator | 2025-07-12 15:58:41.165020 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-12 15:58:41.165031 | orchestrator | Saturday 12 July 2025 15:54:37 +0000 (0:00:00.589) 0:00:01.491 ********* 2025-07-12 15:58:41.165042 | orchestrator | included: 
/ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:58:41.165053 | orchestrator | 2025-07-12 15:58:41.165064 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-07-12 15:58:41.165075 | orchestrator | Saturday 12 July 2025 15:54:38 +0000 (0:00:01.125) 0:00:02.617 ********* 2025-07-12 15:58:41.165086 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:58:41.165096 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:58:41.165107 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:58:41.165118 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:58:41.165129 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:58:41.165139 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:58:41.165150 | orchestrator | 2025-07-12 15:58:41.165167 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-07-12 15:58:41.165178 | orchestrator | Saturday 12 July 2025 15:54:40 +0000 (0:00:01.277) 0:00:03.895 ********* 2025-07-12 15:58:41.165189 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:58:41.165200 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:58:41.165210 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:58:41.165221 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:58:41.165231 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:58:41.165242 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:58:41.165252 | orchestrator | 2025-07-12 15:58:41.165263 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-07-12 15:58:41.165273 | orchestrator | Saturday 12 July 2025 15:54:41 +0000 (0:00:01.146) 0:00:05.042 ********* 2025-07-12 15:58:41.165287 | orchestrator | ok: [testbed-node-0] => { 2025-07-12 15:58:41.165299 | orchestrator |  "changed": false, 2025-07-12 15:58:41.165311 | orchestrator |  "msg": "All assertions passed" 2025-07-12 
15:58:41.165324 | orchestrator | } 2025-07-12 15:58:41.165337 | orchestrator | ok: [testbed-node-1] => { 2025-07-12 15:58:41.165454 | orchestrator |  "changed": false, 2025-07-12 15:58:41.165469 | orchestrator |  "msg": "All assertions passed" 2025-07-12 15:58:41.165481 | orchestrator | } 2025-07-12 15:58:41.165494 | orchestrator | ok: [testbed-node-2] => { 2025-07-12 15:58:41.165506 | orchestrator |  "changed": false, 2025-07-12 15:58:41.165518 | orchestrator |  "msg": "All assertions passed" 2025-07-12 15:58:41.165530 | orchestrator | } 2025-07-12 15:58:41.165551 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 15:58:41.165564 | orchestrator |  "changed": false, 2025-07-12 15:58:41.165576 | orchestrator |  "msg": "All assertions passed" 2025-07-12 15:58:41.165594 | orchestrator | } 2025-07-12 15:58:41.165606 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 15:58:41.165619 | orchestrator |  "changed": false, 2025-07-12 15:58:41.165631 | orchestrator |  "msg": "All assertions passed" 2025-07-12 15:58:41.165643 | orchestrator | } 2025-07-12 15:58:41.165653 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 15:58:41.165664 | orchestrator |  "changed": false, 2025-07-12 15:58:41.165674 | orchestrator |  "msg": "All assertions passed" 2025-07-12 15:58:41.165685 | orchestrator | } 2025-07-12 15:58:41.165695 | orchestrator | 2025-07-12 15:58:41.165706 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-07-12 15:58:41.165728 | orchestrator | Saturday 12 July 2025 15:54:42 +0000 (0:00:00.868) 0:00:05.910 ********* 2025-07-12 15:58:41.165740 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.165779 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.165791 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.165802 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.165812 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.165823 | orchestrator | 
skipping: [testbed-node-5] 2025-07-12 15:58:41.165833 | orchestrator | 2025-07-12 15:58:41.165844 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-07-12 15:58:41.165855 | orchestrator | Saturday 12 July 2025 15:54:42 +0000 (0:00:00.615) 0:00:06.526 ********* 2025-07-12 15:58:41.165866 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-07-12 15:58:41.165876 | orchestrator | 2025-07-12 15:58:41.165887 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-07-12 15:58:41.165897 | orchestrator | Saturday 12 July 2025 15:54:46 +0000 (0:00:03.984) 0:00:10.511 ********* 2025-07-12 15:58:41.165908 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-07-12 15:58:41.165920 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-07-12 15:58:41.165931 | orchestrator | 2025-07-12 15:58:41.165962 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-07-12 15:58:41.165973 | orchestrator | Saturday 12 July 2025 15:54:53 +0000 (0:00:06.572) 0:00:17.083 ********* 2025-07-12 15:58:41.165984 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 15:58:41.166061 | orchestrator | 2025-07-12 15:58:41.166077 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-07-12 15:58:41.166088 | orchestrator | Saturday 12 July 2025 15:54:56 +0000 (0:00:03.329) 0:00:20.412 ********* 2025-07-12 15:58:41.166098 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 15:58:41.166109 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-07-12 15:58:41.166120 | orchestrator | 2025-07-12 15:58:41.166130 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 
2025-07-12 15:58:41.166141 | orchestrator | Saturday 12 July 2025 15:55:00 +0000 (0:00:03.983) 0:00:24.396 ********* 2025-07-12 15:58:41.166151 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 15:58:41.166162 | orchestrator | 2025-07-12 15:58:41.166172 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-07-12 15:58:41.166183 | orchestrator | Saturday 12 July 2025 15:55:04 +0000 (0:00:03.848) 0:00:28.244 ********* 2025-07-12 15:58:41.166193 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-07-12 15:58:41.166204 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-07-12 15:58:41.166214 | orchestrator | 2025-07-12 15:58:41.166225 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-12 15:58:41.166236 | orchestrator | Saturday 12 July 2025 15:55:11 +0000 (0:00:07.339) 0:00:35.584 ********* 2025-07-12 15:58:41.166246 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.166265 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.166275 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.166286 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.166296 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.166307 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.166318 | orchestrator | 2025-07-12 15:58:41.166328 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-07-12 15:58:41.166339 | orchestrator | Saturday 12 July 2025 15:55:12 +0000 (0:00:00.878) 0:00:36.462 ********* 2025-07-12 15:58:41.166377 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.166389 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.166399 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.166442 | orchestrator | skipping: [testbed-node-5] 2025-07-12 
15:58:41.166457 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.166467 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.166478 | orchestrator | 2025-07-12 15:58:41.166489 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-07-12 15:58:41.166499 | orchestrator | Saturday 12 July 2025 15:55:15 +0000 (0:00:02.517) 0:00:38.980 ********* 2025-07-12 15:58:41.166510 | orchestrator | ok: [testbed-node-0] 2025-07-12 15:58:41.166521 | orchestrator | ok: [testbed-node-2] 2025-07-12 15:58:41.166538 | orchestrator | ok: [testbed-node-1] 2025-07-12 15:58:41.166549 | orchestrator | ok: [testbed-node-3] 2025-07-12 15:58:41.166559 | orchestrator | ok: [testbed-node-4] 2025-07-12 15:58:41.166601 | orchestrator | ok: [testbed-node-5] 2025-07-12 15:58:41.166613 | orchestrator | 2025-07-12 15:58:41.166624 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-07-12 15:58:41.166635 | orchestrator | Saturday 12 July 2025 15:55:16 +0000 (0:00:01.475) 0:00:40.455 ********* 2025-07-12 15:58:41.166645 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.166656 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.166666 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.166677 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.166687 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.166698 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.166708 | orchestrator | 2025-07-12 15:58:41.166719 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-07-12 15:58:41.166729 | orchestrator | Saturday 12 July 2025 15:55:18 +0000 (0:00:01.878) 0:00:42.334 ********* 2025-07-12 15:58:41.166750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.166781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.166802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.166814 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.166826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 15:58:41.166841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 15:58:41.166853 | orchestrator |
2025-07-12 15:58:41.166864 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-07-12 15:58:41.166876 | orchestrator | Saturday 12 July 2025 15:55:20 +0000 (0:00:02.388) 0:00:44.722 *********
2025-07-12 15:58:41.166887 | orchestrator | [WARNING]: Skipped
2025-07-12 15:58:41.166927 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-07-12 15:58:41.166939 | orchestrator | due to this access issue:
2025-07-12 15:58:41.166957 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-07-12 15:58:41.166967 | orchestrator | a directory
2025-07-12 15:58:41.166978 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 15:58:41.166989 | orchestrator |
2025-07-12 15:58:41.167009 | orchestrator | TASK [neutron : include_tasks]
************************************************* 2025-07-12 15:58:41.167020 | orchestrator | Saturday 12 July 2025 15:55:21 +0000 (0:00:00.711) 0:00:45.434 ********* 2025-07-12 15:58:41.167031 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 15:58:41.167043 | orchestrator | 2025-07-12 15:58:41.167054 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-07-12 15:58:41.167064 | orchestrator | Saturday 12 July 2025 15:55:22 +0000 (0:00:00.956) 0:00:46.391 ********* 2025-07-12 15:58:41.167076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.167088 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.167100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.167151 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.167215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.167230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.167241 | orchestrator | 2025-07-12 15:58:41.167252 | orchestrator | TASK [service-cert-copy : neutron | Copying over 
backend internal TLS certificate] *** 2025-07-12 15:58:41.167263 | orchestrator | Saturday 12 July 2025 15:55:25 +0000 (0:00:02.691) 0:00:49.082 ********* 2025-07-12 15:58:41.167310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.167325 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.167341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.167383 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.167403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.167415 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.167459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
 2025-07-12 15:58:41.167470 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.167481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.167492 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.167503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.167514 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.167525 | orchestrator | 2025-07-12 15:58:41.167536 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-07-12 15:58:41.167547 
| orchestrator | Saturday 12 July 2025 15:55:28 +0000 (0:00:02.802) 0:00:51.884 ********* 2025-07-12 15:58:41.167589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.167610 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.167628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.167641 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.167652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.167663 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.167674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.167715 | orchestrator | 
skipping: [testbed-node-4] 2025-07-12 15:58:41.167738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.167758 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.167769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.167780 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.167791 | orchestrator | 2025-07-12 15:58:41.167802 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-07-12 15:58:41.167819 | orchestrator | Saturday 12 July 2025 
15:55:31 +0000 (0:00:03.242) 0:00:55.127 ********* 2025-07-12 15:58:41.167831 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.167842 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.167852 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.167863 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.167873 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.167884 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.167894 | orchestrator | 2025-07-12 15:58:41.167905 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-07-12 15:58:41.167916 | orchestrator | Saturday 12 July 2025 15:55:33 +0000 (0:00:02.355) 0:00:57.483 ********* 2025-07-12 15:58:41.167927 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.167937 | orchestrator | 2025-07-12 15:58:41.167948 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-07-12 15:58:41.167959 | orchestrator | Saturday 12 July 2025 15:55:33 +0000 (0:00:00.113) 0:00:57.596 ********* 2025-07-12 15:58:41.167969 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.167980 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.167990 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.168001 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.168011 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.168022 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.168032 | orchestrator | 2025-07-12 15:58:41.168043 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-07-12 15:58:41.168054 | orchestrator | Saturday 12 July 2025 15:55:34 +0000 (0:00:00.600) 0:00:58.196 ********* 2025-07-12 15:58:41.168065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.168082 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.168094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.168105 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.168120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.168132 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.168149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.168161 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.168173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.168184 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.168195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.168215 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.168226 | orchestrator | 2025-07-12 15:58:41.168237 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-07-12 15:58:41.168247 | orchestrator | Saturday 12 July 2025 15:55:36 +0000 (0:00:01.962) 0:01:00.158 ********* 2025-07-12 15:58:41.168263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.168324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.168339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.168372 | orchestrator | 2025-07-12 15:58:41.168384 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-07-12 15:58:41.168395 | orchestrator | Saturday 12 July 2025 15:55:39 +0000 (0:00:03.080) 0:01:03.238 ********* 2025-07-12 15:58:41.168412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.168454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.168470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168488 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.168500 | orchestrator | 2025-07-12 15:58:41.168511 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-07-12 15:58:41.168521 | orchestrator | Saturday 12 July 2025 15:55:45 +0000 (0:00:06.267) 0:01:09.506 ********* 2025-07-12 15:58:41.168532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.168550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.168561 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.168572 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.168583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.168594 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.168609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168658 | orchestrator | 2025-07-12 15:58:41.168669 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-07-12 15:58:41.168680 | orchestrator | Saturday 12 July 2025 15:55:49 +0000 (0:00:03.633) 0:01:13.140 ********* 2025-07-12 15:58:41.168690 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.168701 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.168712 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.168722 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:41.168733 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:58:41.168744 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:58:41.168754 | orchestrator | 2025-07-12 15:58:41.168765 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-07-12 15:58:41.168776 | orchestrator | Saturday 12 July 2025 15:55:52 +0000 (0:00:03.067) 0:01:16.208 ********* 2025-07-12 15:58:41.168786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.168798 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.168813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.168825 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.168843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.168861 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.168872 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.168906 | orchestrator | 2025-07-12 15:58:41.168917 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-07-12 15:58:41.168935 | orchestrator | Saturday 12 July 2025 15:55:56 +0000 (0:00:03.806) 0:01:20.014 ********* 2025-07-12 15:58:41.168946 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.168956 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.168967 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.168978 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.168988 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.168999 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.169009 | orchestrator | 2025-07-12 15:58:41.169020 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-07-12 15:58:41.169030 | orchestrator | Saturday 12 July 2025 15:55:58 +0000 (0:00:02.317) 0:01:22.331 ********* 2025-07-12 15:58:41.169041 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.169051 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.169062 | orchestrator | skipping: [testbed-node-2] 
2025-07-12 15:58:41.169081 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.169091 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.169102 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.169112 | orchestrator | 2025-07-12 15:58:41.169123 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-07-12 15:58:41.169134 | orchestrator | Saturday 12 July 2025 15:56:01 +0000 (0:00:03.395) 0:01:25.727 ********* 2025-07-12 15:58:41.169144 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.169155 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.169166 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.169182 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.169194 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.169204 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.169214 | orchestrator | 2025-07-12 15:58:41.169225 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-07-12 15:58:41.169236 | orchestrator | Saturday 12 July 2025 15:56:03 +0000 (0:00:01.997) 0:01:27.725 ********* 2025-07-12 15:58:41.169246 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.169257 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.169267 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.169278 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.169288 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.169299 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.169309 | orchestrator | 2025-07-12 15:58:41.169320 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-07-12 15:58:41.169330 | orchestrator | Saturday 12 July 2025 15:56:06 +0000 (0:00:02.856) 0:01:30.582 ********* 2025-07-12 15:58:41.169341 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 15:58:41.169383 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.169394 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.169404 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.169415 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.169425 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.169436 | orchestrator | 2025-07-12 15:58:41.169446 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-07-12 15:58:41.169457 | orchestrator | Saturday 12 July 2025 15:56:09 +0000 (0:00:02.452) 0:01:33.035 ********* 2025-07-12 15:58:41.169467 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.169478 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.169488 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.169498 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.169509 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.169519 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.169530 | orchestrator | 2025-07-12 15:58:41.169540 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-07-12 15:58:41.169551 | orchestrator | Saturday 12 July 2025 15:56:11 +0000 (0:00:02.643) 0:01:35.678 ********* 2025-07-12 15:58:41.169561 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 15:58:41.169572 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.169583 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 15:58:41.169593 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.169604 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 15:58:41.169614 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.169625 | 
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 15:58:41.169635 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.169646 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 15:58:41.169656 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.169667 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 15:58:41.169684 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.169695 | orchestrator | 2025-07-12 15:58:41.169705 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-07-12 15:58:41.169716 | orchestrator | Saturday 12 July 2025 15:56:15 +0000 (0:00:03.628) 0:01:39.306 ********* 2025-07-12 15:58:41.169731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.169743 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.169761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.169772 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.169784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.169795 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.169806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.169823 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.169834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.169846 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.169861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.169872 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.169883 | orchestrator | 2025-07-12 15:58:41.169894 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-07-12 15:58:41.169904 | orchestrator | Saturday 12 July 2025 15:56:18 +0000 (0:00:03.213) 0:01:42.520 ********* 2025-07-12 15:58:41.169922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.169934 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.169945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.169962 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.169973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.169984 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.169999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.170010 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.170070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.170082 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.170101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.170113 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.170124 | orchestrator | 2025-07-12 15:58:41.170135 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-07-12 15:58:41.170146 | orchestrator | Saturday 12 July 2025 15:56:20 +0000 (0:00:01.977) 0:01:44.497 ********* 2025-07-12 15:58:41.170157 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.170167 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.170178 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.170196 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.170207 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.170217 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.170242 | orchestrator | 2025-07-12 15:58:41.170253 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-07-12 15:58:41.170264 | orchestrator | Saturday 12 July 2025 15:56:22 +0000 (0:00:01.986) 0:01:46.484 ********* 2025-07-12 15:58:41.170274 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.170285 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.170295 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.170306 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:58:41.170316 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:58:41.170327 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:58:41.170337 | orchestrator | 
2025-07-12 15:58:41.170377 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-07-12 15:58:41.170397 | orchestrator | Saturday 12 July 2025 15:56:27 +0000 (0:00:04.625) 0:01:51.110 ********* 2025-07-12 15:58:41.170416 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.170434 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.170446 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.170457 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.170467 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.170478 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.170488 | orchestrator | 2025-07-12 15:58:41.170499 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-07-12 15:58:41.170510 | orchestrator | Saturday 12 July 2025 15:56:29 +0000 (0:00:02.519) 0:01:53.629 ********* 2025-07-12 15:58:41.170521 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.170531 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.170542 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.170552 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.170562 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.170573 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.170583 | orchestrator | 2025-07-12 15:58:41.170594 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-07-12 15:58:41.170605 | orchestrator | Saturday 12 July 2025 15:56:31 +0000 (0:00:01.950) 0:01:55.579 ********* 2025-07-12 15:58:41.170615 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.170626 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.170636 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.170646 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.170657 | orchestrator | 
skipping: [testbed-node-4] 2025-07-12 15:58:41.170667 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.170677 | orchestrator | 2025-07-12 15:58:41.170688 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-07-12 15:58:41.170699 | orchestrator | Saturday 12 July 2025 15:56:34 +0000 (0:00:02.293) 0:01:57.873 ********* 2025-07-12 15:58:41.170709 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.170719 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.170730 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.170740 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.170751 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.170761 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.170772 | orchestrator | 2025-07-12 15:58:41.170787 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-07-12 15:58:41.170798 | orchestrator | Saturday 12 July 2025 15:56:37 +0000 (0:00:03.499) 0:02:01.373 ********* 2025-07-12 15:58:41.170809 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.170819 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.170830 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.170840 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.170851 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.170861 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.170879 | orchestrator | 2025-07-12 15:58:41.170890 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-07-12 15:58:41.170901 | orchestrator | Saturday 12 July 2025 15:56:40 +0000 (0:00:03.013) 0:02:04.387 ********* 2025-07-12 15:58:41.170911 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.170921 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.170932 | orchestrator | 
skipping: [testbed-node-1] 2025-07-12 15:58:41.170942 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.170953 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.170963 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.170974 | orchestrator | 2025-07-12 15:58:41.170984 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-07-12 15:58:41.170995 | orchestrator | Saturday 12 July 2025 15:56:43 +0000 (0:00:02.587) 0:02:06.974 ********* 2025-07-12 15:58:41.171006 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.171022 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.171033 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.171044 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.171054 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.171065 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.171075 | orchestrator | 2025-07-12 15:58:41.171086 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-07-12 15:58:41.171097 | orchestrator | Saturday 12 July 2025 15:56:46 +0000 (0:00:03.599) 0:02:10.573 ********* 2025-07-12 15:58:41.171108 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.171118 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.171128 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.171139 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.171149 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.171160 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.171170 | orchestrator | 2025-07-12 15:58:41.171181 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-07-12 15:58:41.171191 | orchestrator | Saturday 12 July 2025 15:56:49 +0000 (0:00:02.718) 0:02:13.292 ********* 2025-07-12 15:58:41.171202 | orchestrator | 
skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 15:58:41.171212 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.171223 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 15:58:41.171234 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.171245 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 15:58:41.171255 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.171266 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 15:58:41.171277 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.171287 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 15:58:41.171298 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.171309 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 15:58:41.171319 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.171330 | orchestrator | 2025-07-12 15:58:41.171341 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-07-12 15:58:41.171402 | orchestrator | Saturday 12 July 2025 15:56:52 +0000 (0:00:02.980) 0:02:16.273 ********* 2025-07-12 15:58:41.171414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.171433 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.171449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.171460 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.171481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.171492 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.171501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 15:58:41.171511 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.171521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.171536 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.171546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 15:58:41.171556 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.171565 | orchestrator | 2025-07-12 15:58:41.171575 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-07-12 15:58:41.171584 | orchestrator | Saturday 12 July 2025 15:56:54 +0000 (0:00:02.388) 0:02:18.662 ********* 2025-07-12 15:58:41.171598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.171615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.171626 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.171636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 15:58:41.171660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.171670 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 15:58:41.171680 | orchestrator | 2025-07-12 15:58:41.171690 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-12 15:58:41.171704 | orchestrator | Saturday 12 July 2025 15:56:58 +0000 (0:00:03.833) 0:02:22.495 ********* 2025-07-12 15:58:41.171714 | orchestrator | skipping: [testbed-node-0] 2025-07-12 15:58:41.171724 | orchestrator | skipping: [testbed-node-1] 2025-07-12 15:58:41.171733 | orchestrator | skipping: [testbed-node-2] 2025-07-12 15:58:41.171743 | orchestrator | skipping: [testbed-node-3] 2025-07-12 15:58:41.171752 | orchestrator | skipping: [testbed-node-4] 2025-07-12 15:58:41.171761 | orchestrator | skipping: [testbed-node-5] 2025-07-12 15:58:41.171770 | orchestrator | 2025-07-12 15:58:41.171780 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-07-12 15:58:41.171789 | orchestrator | Saturday 12 July 2025 15:56:59 +0000 (0:00:00.679) 0:02:23.174 ********* 2025-07-12 15:58:41.171799 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:41.171808 | orchestrator | 2025-07-12 15:58:41.171817 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-07-12 15:58:41.171827 | orchestrator | Saturday 12 
July 2025 15:57:01 +0000 (0:00:02.211) 0:02:25.386 ********* 2025-07-12 15:58:41.171836 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:41.171846 | orchestrator | 2025-07-12 15:58:41.171855 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-07-12 15:58:41.171865 | orchestrator | Saturday 12 July 2025 15:57:04 +0000 (0:00:02.595) 0:02:27.982 ********* 2025-07-12 15:58:41.171874 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:41.171883 | orchestrator | 2025-07-12 15:58:41.171892 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 15:58:41.171907 | orchestrator | Saturday 12 July 2025 15:57:47 +0000 (0:00:43.314) 0:03:11.297 ********* 2025-07-12 15:58:41.171917 | orchestrator | 2025-07-12 15:58:41.171926 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 15:58:41.171935 | orchestrator | Saturday 12 July 2025 15:57:47 +0000 (0:00:00.210) 0:03:11.507 ********* 2025-07-12 15:58:41.171945 | orchestrator | 2025-07-12 15:58:41.171954 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 15:58:41.171963 | orchestrator | Saturday 12 July 2025 15:57:48 +0000 (0:00:00.480) 0:03:11.988 ********* 2025-07-12 15:58:41.171973 | orchestrator | 2025-07-12 15:58:41.171982 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 15:58:41.171992 | orchestrator | Saturday 12 July 2025 15:57:48 +0000 (0:00:00.107) 0:03:12.096 ********* 2025-07-12 15:58:41.172001 | orchestrator | 2025-07-12 15:58:41.172010 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 15:58:41.172020 | orchestrator | Saturday 12 July 2025 15:57:48 +0000 (0:00:00.157) 0:03:12.254 ********* 2025-07-12 15:58:41.172029 | orchestrator | 2025-07-12 15:58:41.172038 | 
orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 15:58:41.172048 | orchestrator | Saturday 12 July 2025 15:57:48 +0000 (0:00:00.146) 0:03:12.400 ********* 2025-07-12 15:58:41.172057 | orchestrator | 2025-07-12 15:58:41.172066 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-07-12 15:58:41.172075 | orchestrator | Saturday 12 July 2025 15:57:48 +0000 (0:00:00.117) 0:03:12.518 ********* 2025-07-12 15:58:41.172085 | orchestrator | changed: [testbed-node-0] 2025-07-12 15:58:41.172094 | orchestrator | changed: [testbed-node-1] 2025-07-12 15:58:41.172103 | orchestrator | changed: [testbed-node-2] 2025-07-12 15:58:41.172113 | orchestrator | 2025-07-12 15:58:41.172122 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-07-12 15:58:41.172131 | orchestrator | Saturday 12 July 2025 15:58:16 +0000 (0:00:27.325) 0:03:39.843 ********* 2025-07-12 15:58:41.172141 | orchestrator | changed: [testbed-node-4] 2025-07-12 15:58:41.172150 | orchestrator | changed: [testbed-node-5] 2025-07-12 15:58:41.172159 | orchestrator | changed: [testbed-node-3] 2025-07-12 15:58:41.172169 | orchestrator | 2025-07-12 15:58:41.172178 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 15:58:41.172188 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 15:58:41.172197 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-12 15:58:41.172207 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-12 15:58:41.172221 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 15:58:41.172230 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 15:58:41.172240 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 15:58:41.172249 | orchestrator | 2025-07-12 15:58:41.172259 | orchestrator | 2025-07-12 15:58:41.172268 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 15:58:41.172278 | orchestrator | Saturday 12 July 2025 15:58:39 +0000 (0:00:23.477) 0:04:03.321 ********* 2025-07-12 15:58:41.172287 | orchestrator | =============================================================================== 2025-07-12 15:58:41.172296 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.31s 2025-07-12 15:58:41.172311 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.33s 2025-07-12 15:58:41.172320 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 23.48s 2025-07-12 15:58:41.172329 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.34s 2025-07-12 15:58:41.172361 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.57s 2025-07-12 15:58:41.172373 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.27s 2025-07-12 15:58:41.172382 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.63s 2025-07-12 15:58:41.172392 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.98s 2025-07-12 15:58:41.172401 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.98s 2025-07-12 15:58:41.172410 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.85s 2025-07-12 15:58:41.172420 | orchestrator | neutron : Check neutron containers 
-------------------------------------- 3.83s 2025-07-12 15:58:41.172429 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.81s 2025-07-12 15:58:41.172438 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.63s 2025-07-12 15:58:41.172447 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 3.63s 2025-07-12 15:58:41.172457 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.60s 2025-07-12 15:58:41.172466 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.50s 2025-07-12 15:58:41.172475 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.40s 2025-07-12 15:58:41.172485 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.33s 2025-07-12 15:58:41.172494 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.24s 2025-07-12 15:58:41.172503 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.21s 2025-07-12 15:58:41.172513 | orchestrator | 2025-07-12 15:58:41 | INFO  | Task 4127e268-a20f-41f9-bad8-bf5bc87ca0d9 is in state STARTED 2025-07-12 15:58:41.172522 | orchestrator | 2025-07-12 15:58:41 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:58:41.172532 | orchestrator | 2025-07-12 15:58:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:44.204006 | orchestrator | 2025-07-12 15:58:44 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:58:44.206994 | orchestrator | 2025-07-12 15:58:44 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:44.208813 | orchestrator | 2025-07-12 15:58:44 | INFO  | Task 4127e268-a20f-41f9-bad8-bf5bc87ca0d9 is in state STARTED 2025-07-12 15:58:44.210580 | orchestrator | 
2025-07-12 15:58:44 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:58:44.210613 | orchestrator | 2025-07-12 15:58:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:47.239798 | orchestrator | 2025-07-12 15:58:47 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:58:47.240413 | orchestrator | 2025-07-12 15:58:47 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:58:47.243179 | orchestrator | 2025-07-12 15:58:47 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:47.243536 | orchestrator | 2025-07-12 15:58:47 | INFO  | Task 4127e268-a20f-41f9-bad8-bf5bc87ca0d9 is in state SUCCESS 2025-07-12 15:58:47.244411 | orchestrator | 2025-07-12 15:58:47 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:58:47.244436 | orchestrator | 2025-07-12 15:58:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:50.299553 | orchestrator | 2025-07-12 15:58:50 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:58:50.299656 | orchestrator | 2025-07-12 15:58:50 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:58:50.300460 | orchestrator | 2025-07-12 15:58:50 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:50.303459 | orchestrator | 2025-07-12 15:58:50 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:58:50.303508 | orchestrator | 2025-07-12 15:58:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:53.352610 | orchestrator | 2025-07-12 15:58:53 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:58:53.356530 | orchestrator | 2025-07-12 15:58:53 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:58:53.362218 | orchestrator | 2025-07-12 15:58:53 | INFO  | 
Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:53.364906 | orchestrator | 2025-07-12 15:58:53 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:58:53.365276 | orchestrator | 2025-07-12 15:58:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:56.409785 | orchestrator | 2025-07-12 15:58:56 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:58:56.411648 | orchestrator | 2025-07-12 15:58:56 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:58:56.413463 | orchestrator | 2025-07-12 15:58:56 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:56.414771 | orchestrator | 2025-07-12 15:58:56 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:58:56.414975 | orchestrator | 2025-07-12 15:58:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:58:59.449744 | orchestrator | 2025-07-12 15:58:59 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:58:59.449852 | orchestrator | 2025-07-12 15:58:59 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:58:59.450800 | orchestrator | 2025-07-12 15:58:59 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:58:59.452012 | orchestrator | 2025-07-12 15:58:59 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:58:59.452038 | orchestrator | 2025-07-12 15:58:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:59:02.493506 | orchestrator | 2025-07-12 15:59:02 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:59:02.494827 | orchestrator | 2025-07-12 15:59:02 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:59:02.496759 | orchestrator | 2025-07-12 15:59:02 | INFO  | Task 
96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:59:02.498372 | orchestrator | 2025-07-12 15:59:02 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:59:02.498399 | orchestrator | 2025-07-12 15:59:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:59:05.537640 | orchestrator | 2025-07-12 15:59:05 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:59:05.538122 | orchestrator | 2025-07-12 15:59:05 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:59:05.539209 | orchestrator | 2025-07-12 15:59:05 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:59:05.539697 | orchestrator | 2025-07-12 15:59:05 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:59:05.539825 | orchestrator | 2025-07-12 15:59:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:59:08.577203 | orchestrator | 2025-07-12 15:59:08 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:59:08.578168 | orchestrator | 2025-07-12 15:59:08 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:59:08.580119 | orchestrator | 2025-07-12 15:59:08 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:59:08.581256 | orchestrator | 2025-07-12 15:59:08 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:59:08.581312 | orchestrator | 2025-07-12 15:59:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:59:11.627541 | orchestrator | 2025-07-12 15:59:11 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:59:11.628378 | orchestrator | 2025-07-12 15:59:11 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:59:11.630123 | orchestrator | 2025-07-12 15:59:11 | INFO  | Task 
96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:59:11.631690 | orchestrator | 2025-07-12 15:59:11 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:59:11.631866 | orchestrator | 2025-07-12 15:59:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:59:14.671227 | orchestrator | 2025-07-12 15:59:14 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:59:14.671473 | orchestrator | 2025-07-12 15:59:14 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:59:14.672495 | orchestrator | 2025-07-12 15:59:14 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:59:14.674194 | orchestrator | 2025-07-12 15:59:14 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:59:14.674227 | orchestrator | 2025-07-12 15:59:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:59:17.717690 | orchestrator | 2025-07-12 15:59:17 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:59:17.723001 | orchestrator | 2025-07-12 15:59:17 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:59:17.723919 | orchestrator | 2025-07-12 15:59:17 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:59:17.728008 | orchestrator | 2025-07-12 15:59:17 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:59:17.729124 | orchestrator | 2025-07-12 15:59:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:59:20.770978 | orchestrator | 2025-07-12 15:59:20 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:59:20.772457 | orchestrator | 2025-07-12 15:59:20 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:59:20.774130 | orchestrator | 2025-07-12 15:59:20 | INFO  | Task 
96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:59:20.776141 | orchestrator | 2025-07-12 15:59:20 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:59:20.776279 | orchestrator | 2025-07-12 15:59:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:59:23.812603 | orchestrator | 2025-07-12 15:59:23 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:59:23.813482 | orchestrator | 2025-07-12 15:59:23 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:59:23.814812 | orchestrator | 2025-07-12 15:59:23 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:59:23.816323 | orchestrator | 2025-07-12 15:59:23 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:59:23.816644 | orchestrator | 2025-07-12 15:59:23 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:59:26.865128 | orchestrator | 2025-07-12 15:59:26 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:59:26.868667 | orchestrator | 2025-07-12 15:59:26 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:59:26.872146 | orchestrator | 2025-07-12 15:59:26 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:59:26.876871 | orchestrator | 2025-07-12 15:59:26 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:59:26.876896 | orchestrator | 2025-07-12 15:59:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 15:59:29.907949 | orchestrator | 2025-07-12 15:59:29 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 15:59:29.909633 | orchestrator | 2025-07-12 15:59:29 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 15:59:29.910799 | orchestrator | 2025-07-12 15:59:29 | INFO  | Task 
96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 15:59:29.912317 | orchestrator | 2025-07-12 15:59:29 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 15:59:29.912954 | orchestrator | 2025-07-12 15:59:29 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 15:59:32 to 16:00:30 for tasks f1c3e972-7877-4282-a469-5e89385ef74c, ee072def-c793-4214-b3d3-c7d5d02b96ce, 96595211-5080-45eb-b7d4-e069ca7ce969 and 2e90d76b-c572-4062-90f6-a679c3ddc9f8; all remained in state STARTED ...]
2025-07-12 16:00:33.827313 | orchestrator | 2025-07-12 16:00:33 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 16:00:33.828217 | orchestrator | 2025-07-12 16:00:33 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 16:00:33.830112 | orchestrator | 2025-07-12 16:00:33 | INFO  | Task 
96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 16:00:33.831968 | orchestrator | 2025-07-12 16:00:33 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 16:00:33.832050 | orchestrator | 2025-07-12 16:00:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 16:00:36.873577 | orchestrator | 2025-07-12 16:00:36 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 16:00:36.875738 | orchestrator | 2025-07-12 16:00:36 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 16:00:36.877908 | orchestrator | 2025-07-12 16:00:36 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 16:00:36.880037 | orchestrator | 2025-07-12 16:00:36 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state STARTED 2025-07-12 16:00:36.880062 | orchestrator | 2025-07-12 16:00:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 16:00:39.926925 | orchestrator | 2025-07-12 16:00:39 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 16:00:39.928175 | orchestrator | 2025-07-12 16:00:39 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 16:00:39.930400 | orchestrator | 2025-07-12 16:00:39 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 16:00:39.933574 | orchestrator | 2025-07-12 16:00:39 | INFO  | Task 2e90d76b-c572-4062-90f6-a679c3ddc9f8 is in state SUCCESS 2025-07-12 16:00:39.935332 | orchestrator | 2025-07-12 16:00:39.935488 | orchestrator | 2025-07-12 16:00:39.935506 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 16:00:39.935519 | orchestrator | 2025-07-12 16:00:39.935530 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 16:00:39.935542 | orchestrator | Saturday 12 July 2025 15:58:43 +0000 (0:00:00.128) 0:00:00.128 
********* 2025-07-12 16:00:39.935553 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:00:39.935565 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:00:39.935575 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:00:39.935586 | orchestrator | 2025-07-12 16:00:39.935598 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 16:00:39.935608 | orchestrator | Saturday 12 July 2025 15:58:43 +0000 (0:00:00.213) 0:00:00.341 ********* 2025-07-12 16:00:39.935620 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-07-12 16:00:39.935631 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-07-12 16:00:39.935642 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-07-12 16:00:39.935653 | orchestrator | 2025-07-12 16:00:39.935664 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-07-12 16:00:39.935674 | orchestrator | 2025-07-12 16:00:39.935685 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-07-12 16:00:39.935696 | orchestrator | Saturday 12 July 2025 15:58:43 +0000 (0:00:00.405) 0:00:00.747 ********* 2025-07-12 16:00:39.935707 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:00:39.935718 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:00:39.935729 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:00:39.935739 | orchestrator | 2025-07-12 16:00:39.935750 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 16:00:39.935761 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 16:00:39.935774 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 16:00:39.935808 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 
16:00:39.935819 | orchestrator | 2025-07-12 16:00:39.935830 | orchestrator | 2025-07-12 16:00:39.935841 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 16:00:39.935851 | orchestrator | Saturday 12 July 2025 15:58:44 +0000 (0:00:00.845) 0:00:01.592 ********* 2025-07-12 16:00:39.935862 | orchestrator | =============================================================================== 2025-07-12 16:00:39.935872 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.85s 2025-07-12 16:00:39.935883 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2025-07-12 16:00:39.935894 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.21s 2025-07-12 16:00:39.935905 | orchestrator | 2025-07-12 16:00:39.935918 | orchestrator | 2025-07-12 16:00:39.935931 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 16:00:39.935961 | orchestrator | 2025-07-12 16:00:39.935974 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 16:00:39.936005 | orchestrator | Saturday 12 July 2025 15:58:38 +0000 (0:00:00.198) 0:00:00.198 ********* 2025-07-12 16:00:39.936018 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:00:39.936030 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:00:39.936042 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:00:39.936055 | orchestrator | 2025-07-12 16:00:39.936068 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 16:00:39.936081 | orchestrator | Saturday 12 July 2025 15:58:38 +0000 (0:00:00.238) 0:00:00.436 ********* 2025-07-12 16:00:39.936092 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-07-12 16:00:39.936103 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-07-12 
16:00:39.936114 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-07-12 16:00:39.936124 | orchestrator | 2025-07-12 16:00:39.936135 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-07-12 16:00:39.936146 | orchestrator | 2025-07-12 16:00:39.936157 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-12 16:00:39.936168 | orchestrator | Saturday 12 July 2025 15:58:39 +0000 (0:00:00.415) 0:00:00.852 ********* 2025-07-12 16:00:39.936179 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:00:39.936189 | orchestrator | 2025-07-12 16:00:39.936200 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-07-12 16:00:39.936211 | orchestrator | Saturday 12 July 2025 15:58:39 +0000 (0:00:00.610) 0:00:01.463 ********* 2025-07-12 16:00:39.936222 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-07-12 16:00:39.936233 | orchestrator | 2025-07-12 16:00:39.936244 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-07-12 16:00:39.936254 | orchestrator | Saturday 12 July 2025 15:58:43 +0000 (0:00:04.123) 0:00:05.586 ********* 2025-07-12 16:00:39.936265 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-07-12 16:00:39.936276 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-07-12 16:00:39.936287 | orchestrator | 2025-07-12 16:00:39.936298 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-07-12 16:00:39.936309 | orchestrator | Saturday 12 July 2025 15:58:50 +0000 (0:00:06.826) 0:00:12.413 ********* 2025-07-12 16:00:39.936320 | orchestrator | ok: [testbed-node-0] => 
(item=service) 2025-07-12 16:00:39.936331 | orchestrator | 2025-07-12 16:00:39.936342 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-07-12 16:00:39.936352 | orchestrator | Saturday 12 July 2025 15:58:54 +0000 (0:00:03.846) 0:00:16.259 ********* 2025-07-12 16:00:39.936379 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 16:00:39.936400 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-07-12 16:00:39.936412 | orchestrator | 2025-07-12 16:00:39.936423 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-07-12 16:00:39.936433 | orchestrator | Saturday 12 July 2025 15:58:58 +0000 (0:00:03.723) 0:00:19.983 ********* 2025-07-12 16:00:39.936444 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 16:00:39.936455 | orchestrator | 2025-07-12 16:00:39.936465 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-07-12 16:00:39.936477 | orchestrator | Saturday 12 July 2025 15:59:02 +0000 (0:00:03.878) 0:00:23.862 ********* 2025-07-12 16:00:39.936487 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-07-12 16:00:39.936498 | orchestrator | 2025-07-12 16:00:39.936509 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-07-12 16:00:39.936520 | orchestrator | Saturday 12 July 2025 15:59:06 +0000 (0:00:04.380) 0:00:28.242 ********* 2025-07-12 16:00:39.936530 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:00:39.936541 | orchestrator | 2025-07-12 16:00:39.936552 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-07-12 16:00:39.936563 | orchestrator | Saturday 12 July 2025 15:59:10 +0000 (0:00:03.685) 0:00:31.928 ********* 2025-07-12 16:00:39.936573 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:00:39.936584 
| orchestrator | 2025-07-12 16:00:39.936595 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-07-12 16:00:39.936606 | orchestrator | Saturday 12 July 2025 15:59:14 +0000 (0:00:04.120) 0:00:36.049 ********* 2025-07-12 16:00:39.936616 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:00:39.936627 | orchestrator | 2025-07-12 16:00:39.936638 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-07-12 16:00:39.936648 | orchestrator | Saturday 12 July 2025 15:59:18 +0000 (0:00:03.937) 0:00:39.986 ********* 2025-07-12 16:00:39.936662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.936678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.936689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.936716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.936730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.936742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.936753 | orchestrator | 2025-07-12 16:00:39.936764 | orchestrator | TASK [magnum : Check if policies shall be overwritten] 
************************* 2025-07-12 16:00:39.936775 | orchestrator | Saturday 12 July 2025 15:59:19 +0000 (0:00:01.320) 0:00:41.307 ********* 2025-07-12 16:00:39.936786 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:00:39.936797 | orchestrator | 2025-07-12 16:00:39.936807 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-07-12 16:00:39.936818 | orchestrator | Saturday 12 July 2025 15:59:19 +0000 (0:00:00.128) 0:00:41.435 ********* 2025-07-12 16:00:39.936829 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:00:39.936840 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:00:39.936851 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:00:39.936861 | orchestrator | 2025-07-12 16:00:39.936872 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-07-12 16:00:39.936883 | orchestrator | Saturday 12 July 2025 15:59:20 +0000 (0:00:00.464) 0:00:41.900 ********* 2025-07-12 16:00:39.936900 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 16:00:39.936911 | orchestrator | 2025-07-12 16:00:39.936922 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-07-12 16:00:39.936933 | orchestrator | Saturday 12 July 2025 15:59:20 +0000 (0:00:00.838) 0:00:42.739 ********* 2025-07-12 16:00:39.936945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.936964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.936976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.937007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.937019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.937038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.937050 | orchestrator | 2025-07-12 16:00:39.937061 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-07-12 16:00:39.937078 | orchestrator | Saturday 12 July 2025 15:59:23 +0000 (0:00:02.402) 0:00:45.142 ********* 2025-07-12 16:00:39.937089 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:00:39.937100 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:00:39.937111 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:00:39.937122 | orchestrator | 2025-07-12 16:00:39.937133 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-12 16:00:39.937144 | orchestrator | Saturday 12 July 2025 15:59:23 +0000 (0:00:00.303) 0:00:45.445 ********* 2025-07-12 16:00:39.937154 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:00:39.937165 | orchestrator | 2025-07-12 16:00:39.937176 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-07-12 16:00:39.937187 | orchestrator | Saturday 12 July 2025 15:59:24 +0000 (0:00:00.701) 0:00:46.147 ********* 2025-07-12 16:00:39.937198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.937210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.937228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.937240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.937258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.937270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.937281 | orchestrator | 2025-07-12 16:00:39.937292 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-07-12 16:00:39.937303 | orchestrator | Saturday 12 July 2025 15:59:26 +0000 (0:00:02.445) 0:00:48.592 ********* 2025-07-12 16:00:39.937314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 16:00:39.937333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:00:39.937344 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:00:39.937362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 16:00:39.937375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:00:39.937386 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:00:39.937398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 16:00:39.937416 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:00:39.937428 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:00:39.937439 | orchestrator | 2025-07-12 16:00:39.937450 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-07-12 16:00:39.937461 | orchestrator | Saturday 12 July 2025 15:59:27 +0000 (0:00:00.633) 0:00:49.226 ********* 2025-07-12 16:00:39.937472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}})  2025-07-12 16:00:39.937492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:00:39.937504 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:00:39.937516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 16:00:39.937528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:00:39.937545 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:00:39.937556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 16:00:39.937568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:00:39.937579 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:00:39.937590 | orchestrator | 2025-07-12 16:00:39.937601 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-07-12 16:00:39.937611 | orchestrator | Saturday 12 July 2025 15:59:28 +0000 (0:00:01.213) 0:00:50.440 ********* 2025-07-12 16:00:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 16:00:39.937906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.937943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.937967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.938001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.938074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.938091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.938103 | orchestrator | 2025-07-12 16:00:39.938114 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-07-12 16:00:39.938133 | orchestrator | Saturday 12 July 2025 15:59:31 +0000 (0:00:02.366) 0:00:52.807 ********* 2025-07-12 16:00:39.938145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.938156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.938169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.938191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.938203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.938222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.938233 | orchestrator | 2025-07-12 16:00:39.938245 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-07-12 16:00:39.938257 | orchestrator | Saturday 12 July 2025 15:59:36 +0000 (0:00:05.179) 0:00:57.987 ********* 2025-07-12 16:00:39.938270 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 16:00:39.938281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:00:39.938293 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:00:39.938314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 16:00:39.938333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:00:39.938345 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:00:39.938357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 16:00:39.938369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:00:39.938381 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:00:39.938392 | orchestrator | 2025-07-12 16:00:39.938403 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-07-12 16:00:39.938414 | orchestrator | Saturday 12 July 2025 15:59:37 +0000 (0:00:00.848) 0:00:58.835 ********* 2025-07-12 16:00:39.938433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.938445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.938466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 16:00:39.938478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.938490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.938509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:00:39.938533 | orchestrator | 2025-07-12 16:00:39.938547 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-12 16:00:39.938560 | orchestrator | Saturday 12 July 2025 15:59:39 +0000 (0:00:02.496) 0:01:01.332 ********* 2025-07-12 16:00:39.938574 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:00:39.938586 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:00:39.938599 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:00:39.938612 | orchestrator | 2025-07-12 16:00:39.938625 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-07-12 16:00:39.938637 | orchestrator | Saturday 12 July 2025 15:59:39 +0000 (0:00:00.280) 0:01:01.613 ********* 2025-07-12 16:00:39.938650 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:00:39.938662 | orchestrator | 2025-07-12 16:00:39.938675 | orchestrator | 
TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-07-12 16:00:39.938687 | orchestrator | Saturday 12 July 2025 15:59:42 +0000 (0:00:02.280) 0:01:03.894 ********* 2025-07-12 16:00:39.938700 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:00:39.938713 | orchestrator | 2025-07-12 16:00:39.938725 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-07-12 16:00:39.938738 | orchestrator | Saturday 12 July 2025 15:59:44 +0000 (0:00:02.424) 0:01:06.318 ********* 2025-07-12 16:00:39.938749 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:00:39.938761 | orchestrator | 2025-07-12 16:00:39.938774 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 16:00:39.938786 | orchestrator | Saturday 12 July 2025 16:00:03 +0000 (0:00:19.231) 0:01:25.550 ********* 2025-07-12 16:00:39.938798 | orchestrator | 2025-07-12 16:00:39.938811 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 16:00:39.938823 | orchestrator | Saturday 12 July 2025 16:00:03 +0000 (0:00:00.061) 0:01:25.611 ********* 2025-07-12 16:00:39.938835 | orchestrator | 2025-07-12 16:00:39.938848 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 16:00:39.938860 | orchestrator | Saturday 12 July 2025 16:00:03 +0000 (0:00:00.060) 0:01:25.671 ********* 2025-07-12 16:00:39.938873 | orchestrator | 2025-07-12 16:00:39.938885 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-07-12 16:00:39.938896 | orchestrator | Saturday 12 July 2025 16:00:03 +0000 (0:00:00.062) 0:01:25.734 ********* 2025-07-12 16:00:39.938907 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:00:39.938917 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:00:39.938928 | orchestrator | changed: [testbed-node-1] 2025-07-12 
16:00:39.938939 | orchestrator | 2025-07-12 16:00:39.938950 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-07-12 16:00:39.938961 | orchestrator | Saturday 12 July 2025 16:00:18 +0000 (0:00:14.597) 0:01:40.332 ********* 2025-07-12 16:00:39.938971 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:00:39.939013 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:00:39.939025 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:00:39.939036 | orchestrator | 2025-07-12 16:00:39.939047 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 16:00:39.939058 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 16:00:39.939069 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 16:00:39.939080 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 16:00:39.939091 | orchestrator | 2025-07-12 16:00:39.939102 | orchestrator | 2025-07-12 16:00:39.939112 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 16:00:39.939123 | orchestrator | Saturday 12 July 2025 16:00:38 +0000 (0:00:19.419) 0:01:59.751 ********* 2025-07-12 16:00:39.939134 | orchestrator | =============================================================================== 2025-07-12 16:00:39.939152 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 19.42s 2025-07-12 16:00:39.939162 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 19.23s 2025-07-12 16:00:39.939173 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.60s 2025-07-12 16:00:39.939184 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 
6.83s 2025-07-12 16:00:39.939195 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.18s 2025-07-12 16:00:39.939205 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.38s 2025-07-12 16:00:39.939216 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.12s 2025-07-12 16:00:39.939227 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.12s 2025-07-12 16:00:39.939237 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.94s 2025-07-12 16:00:39.939248 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.88s 2025-07-12 16:00:39.939259 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.85s 2025-07-12 16:00:39.939270 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.72s 2025-07-12 16:00:39.939280 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.69s 2025-07-12 16:00:39.939291 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.50s 2025-07-12 16:00:39.939302 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.45s 2025-07-12 16:00:39.939319 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.42s 2025-07-12 16:00:39.939330 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.40s 2025-07-12 16:00:39.939341 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.37s 2025-07-12 16:00:39.939352 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.28s 2025-07-12 16:00:39.939363 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.32s 
2025-07-12 16:00:42.978662 | orchestrator | 2025-07-12 16:00:42 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 16:00:42.979492 | orchestrator | 2025-07-12 16:00:42 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 16:00:42.980757 | orchestrator | 2025-07-12 16:00:42 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 16:00:42.980885 | orchestrator | 2025-07-12 16:00:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 16:00:46.042118 | orchestrator | 2025-07-12 16:00:46 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 16:00:46.043915 | orchestrator | 2025-07-12 16:00:46 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 16:00:46.045365 | orchestrator | 2025-07-12 16:00:46 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 16:00:46.045502 | orchestrator | 2025-07-12 16:00:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 16:00:49.090785 | orchestrator | 2025-07-12 16:00:49 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 16:00:49.092180 | orchestrator | 2025-07-12 16:00:49 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 16:00:49.094708 | orchestrator | 2025-07-12 16:00:49 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 16:00:49.094843 | orchestrator | 2025-07-12 16:00:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 16:00:52.132129 | orchestrator | 2025-07-12 16:00:52 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 16:00:52.133857 | orchestrator | 2025-07-12 16:00:52 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 16:00:52.136254 | orchestrator | 2025-07-12 16:00:52 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 16:00:52.136348 | 
orchestrator | 2025-07-12 16:00:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 16:00:55.192774 | orchestrator | 2025-07-12 16:00:55 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 16:00:55.200031 | orchestrator | 2025-07-12 16:00:55 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state STARTED 2025-07-12 16:00:55.203610 | orchestrator | 2025-07-12 16:00:55 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED 2025-07-12 16:00:55.203873 | orchestrator | 2025-07-12 16:00:55 | INFO  | Wait 1 second(s) until the next check 2025-07-12 16:00:58.251018 | orchestrator | 2025-07-12 16:00:58 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED 2025-07-12 16:00:58.254492 | orchestrator | 2025-07-12 16:00:58 | INFO  | Task ee072def-c793-4214-b3d3-c7d5d02b96ce is in state SUCCESS 2025-07-12 16:00:58.256368 | orchestrator | 2025-07-12 16:00:58.256412 | orchestrator | 2025-07-12 16:00:58.256425 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 16:00:58.256437 | orchestrator | 2025-07-12 16:00:58.256449 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 16:00:58.256461 | orchestrator | Saturday 12 July 2025 15:58:41 +0000 (0:00:00.231) 0:00:00.231 ********* 2025-07-12 16:00:58.256472 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:00:58.256484 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:00:58.256495 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:00:58.256506 | orchestrator | 2025-07-12 16:00:58.256518 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 16:00:58.256530 | orchestrator | Saturday 12 July 2025 15:58:41 +0000 (0:00:00.268) 0:00:00.500 ********* 2025-07-12 16:00:58.256541 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-07-12 16:00:58.256553 | orchestrator | ok: 
[testbed-node-1] => (item=enable_grafana_True) 2025-07-12 16:00:58.256564 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-07-12 16:00:58.256575 | orchestrator | 2025-07-12 16:00:58.256586 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-07-12 16:00:58.256598 | orchestrator | 2025-07-12 16:00:58.256609 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-12 16:00:58.256620 | orchestrator | Saturday 12 July 2025 15:58:42 +0000 (0:00:00.311) 0:00:00.811 ********* 2025-07-12 16:00:58.256694 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:00:58.256706 | orchestrator | 2025-07-12 16:00:58.256717 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-07-12 16:00:58.256728 | orchestrator | Saturday 12 July 2025 15:58:42 +0000 (0:00:00.473) 0:00:01.285 ********* 2025-07-12 16:00:58.256742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 16:00:58.256757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 16:00:58.256807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 16:00:58.256821 | orchestrator | 2025-07-12 16:00:58.256832 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-07-12 16:00:58.256883 | orchestrator | Saturday 12 July 2025 15:58:43 +0000 (0:00:00.770) 0:00:02.055 ********* 2025-07-12 16:00:58.256895 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-07-12 16:00:58.256994 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-07-12 16:00:58.257010 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 16:00:58.257032 | orchestrator | 2025-07-12 16:00:58.257046 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-12 16:00:58.257058 | orchestrator | 
Saturday 12 July 2025 15:58:44 +0000 (0:00:00.738) 0:00:02.794 ********* 2025-07-12 16:00:58.257070 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:00:58.257081 | orchestrator | 2025-07-12 16:00:58.257091 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-07-12 16:00:58.257102 | orchestrator | Saturday 12 July 2025 15:58:44 +0000 (0:00:00.672) 0:00:03.466 ********* 2025-07-12 16:00:58.257128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 16:00:58.257141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 16:00:58.257153 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 16:00:58.257173 | orchestrator | 2025-07-12 16:00:58.257184 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-07-12 16:00:58.257195 | orchestrator | Saturday 12 July 2025 15:58:46 +0000 (0:00:01.402) 0:00:04.868 ********* 2025-07-12 16:00:58.257206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 16:00:58.257217 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:00:58.257229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 16:00:58.257240 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:00:58.257260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 16:00:58.257271 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:00:58.257282 | orchestrator | 2025-07-12 16:00:58.257293 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-07-12 16:00:58.257303 | orchestrator | Saturday 12 July 2025 15:58:46 +0000 (0:00:00.324) 0:00:05.193 ********* 2025-07-12 16:00:58.257314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 16:00:58.257333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 16:00:58.257344 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:00:58.257355 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:00:58.257366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 16:00:58.257377 | orchestrator | 
skipping: [testbed-node-2] 2025-07-12 16:00:58.257387 | orchestrator | 2025-07-12 16:00:58.257398 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-07-12 16:00:58.257409 | orchestrator | Saturday 12 July 2025 15:58:47 +0000 (0:00:00.678) 0:00:05.872 ********* 2025-07-12 16:00:58.257419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 16:00:58.257437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 16:00:58.257449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 16:00:58.257460 | orchestrator |
2025-07-12 16:00:58.257471 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-07-12 16:00:58.257488 | orchestrator | Saturday 12 July 2025 15:58:48 +0000 (0:00:01.175) 0:00:07.048 *********
2025-07-12 16:00:58.257500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 16:00:58.257511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 16:00:58.257523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 16:00:58.257534 | orchestrator |
2025-07-12 16:00:58.257545 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-07-12 16:00:58.257556 | orchestrator | Saturday 12 July 2025 15:58:49 +0000 (0:00:01.372) 0:00:08.420 *********
2025-07-12 16:00:58.257566 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:00:58.257577 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:00:58.257588 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:00:58.257598 | orchestrator |
2025-07-12 16:00:58.257609 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-07-12 16:00:58.257619 | orchestrator | Saturday 12 July 2025 15:58:50 +0000 (0:00:00.472) 0:00:08.892 *********
2025-07-12 16:00:58.257630 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 16:00:58.257641 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 16:00:58.257652 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 16:00:58.257662 | orchestrator |
2025-07-12 16:00:58.257672 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-07-12 16:00:58.257683 | orchestrator | Saturday 12 July 2025 15:58:51 +0000 (0:00:01.388) 0:00:10.281 *********
2025-07-12 16:00:58.257694 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 16:00:58.257710 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 16:00:58.257721 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 16:00:58.257738 | orchestrator |
2025-07-12 16:00:58.257749 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-07-12 16:00:58.257760 | orchestrator | Saturday 12 July 2025 15:58:52 +0000 (0:00:01.325) 0:00:11.607 *********
2025-07-12 16:00:58.257770 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 16:00:58.257781 | orchestrator |
2025-07-12 16:00:58.257791 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-07-12 16:00:58.257802 | orchestrator | Saturday 12 July 2025 15:58:53 +0000 (0:00:00.700) 0:00:12.307 *********
2025-07-12 16:00:58.257812 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-07-12 16:00:58.257823 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-07-12 16:00:58.257833 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:00:58.257844 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:00:58.257854 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:00:58.257865 | orchestrator |
2025-07-12 16:00:58.257875 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-07-12 16:00:58.257886 | orchestrator | Saturday 12 July 2025 15:58:54 +0000 (0:00:00.799) 0:00:13.107 *********
2025-07-12 16:00:58.257897 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:00:58.257907 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:00:58.257918 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:00:58.257946 | orchestrator |
2025-07-12 16:00:58.257957 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-07-12 16:00:58.257967 | orchestrator | Saturday 12 July 2025 15:58:55 +0000 (0:00:00.592) 0:00:13.700 *********
2025-07-12 16:00:58.257979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100304, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0103858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.257991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100304, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0103858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100304, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0103858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100270, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9773853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100270, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9773853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100270, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9773853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1100211, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.954385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1100211, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.954385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1100211, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.954385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100296, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9833856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100296, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9833856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100296, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9833856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1100164, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.938385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1100164, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.938385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1100164, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.938385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100228, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9583852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100228, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9583852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100228, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9583852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1100291, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9823854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1100291, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9823854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1100291, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9823854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100163, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9373848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100163, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9373848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100163, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9373848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1100090, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9133847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1100090, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9133847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1100090, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9133847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.258996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1100170, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.941385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1100170, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.941385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1100170, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.941385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1100102, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9173846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1100102, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9173846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100271, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9823854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1100102, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9173846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100271, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9823854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1100172, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.944385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100271, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9823854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1100172, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.944385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100300, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9833856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1100172, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.944385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100300, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9833856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100160, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9353848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100300, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9833856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100160, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9353848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1100232, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9763854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100160, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9353848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1100232, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9763854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1100092, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9153845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1100232, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9763854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1100092, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9153845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1100108, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9343848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1100092, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9153845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259387 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1100108, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9343848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1100184, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.950385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1100108, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.9343848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259434 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1100184, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.950385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1100483, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1493876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1100184, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332940.950385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1100483, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1493876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1100475, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.028386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1100483, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1493876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1100475, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.028386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1100425, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0103858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1100425, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 
'ctime': 1752332941.0103858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1100475, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.028386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1101657, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.34339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 115472, 'inode': 1101657, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.34339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1100425, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0103858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100428, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.011386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100428, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.011386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1101657, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.34339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1100987, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.33939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1100987, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.33939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100428, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.011386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1101670, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.3743904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259729 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1101670, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.3743904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100981, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1503875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1100987, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.33939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100981, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1503875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1100985, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1513877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1101670, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.3743904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1100985, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1513877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100431, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0133858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100981, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1752332941.1503875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100431, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0133858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1100476, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.028386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 
1100985, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1513877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1100476, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.028386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1101773, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.3753905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 16:00:58.259969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100431, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0133858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1101773, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.3753905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.259991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1101654, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.34039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1100476, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.028386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1101654, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.34039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100439, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.017386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1101773, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.3753905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100439, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.017386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1100435, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0133858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1101654, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.34039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1100435, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0133858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1100448, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.017386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100439, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.017386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1100453, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.027386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1100448, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.017386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1100435, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.0133858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1100453, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.027386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100479, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.029386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1100448, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.017386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1100984, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1513877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100479, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.029386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1100453, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.027386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1100481, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.029386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1100984, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1513877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1101997, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4293911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100479, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.029386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1100481, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.029386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1100984, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.1513877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1101997, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4293911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1100481, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.029386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1101997, 'dev': 118, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752332941.4293911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 16:00:58.260474 | orchestrator |
2025-07-12 16:00:58.260487 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-07-12 16:00:58.260499 | orchestrator | Saturday 12 July 2025 15:59:34 +0000 (0:00:39.268) 0:00:52.969 *********
2025-07-12 16:00:58.260511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 16:00:58.260523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 16:00:58.260534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 16:00:58.260545 | orchestrator |
2025-07-12 16:00:58.260635 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-07-12 16:00:58.260647 | orchestrator | Saturday 12 July 2025 15:59:35 +0000 (0:00:01.092) 0:00:54.061 *********
2025-07-12 16:00:58.260659 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:00:58.260678 | orchestrator |
2025-07-12 16:00:58.260690 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-07-12 16:00:58.260708 | orchestrator | Saturday 12 July 2025 15:59:38 +0000 (0:00:02.720) 0:00:56.782 *********
2025-07-12 16:00:58.260720 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:00:58.260730 | orchestrator |
2025-07-12 16:00:58.260741 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-12 16:00:58.260752 | orchestrator | Saturday 12 July 2025 15:59:40 +0000 (0:00:02.449) 0:00:59.231 *********
2025-07-12 16:00:58.260762 | orchestrator |
2025-07-12 16:00:58.260773 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-12 16:00:58.260784 | orchestrator | Saturday 12 July 2025 15:59:40 +0000 (0:00:00.190) 0:00:59.422 *********
2025-07-12 16:00:58.260794 | orchestrator |
2025-07-12 16:00:58.260805 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-12 16:00:58.260816 | orchestrator | Saturday 12 July 2025 15:59:40 +0000 (0:00:00.059) 0:00:59.482 *********
2025-07-12 16:00:58.260826 | orchestrator |
2025-07-12 16:00:58.260837 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-07-12 16:00:58.260848 | orchestrator | Saturday 12 July 2025 15:59:40 +0000 (0:00:00.061) 0:00:59.543 *********
2025-07-12 16:00:58.260859 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:00:58.260876 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:00:58.260887 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:00:58.260898 | orchestrator |
2025-07-12 16:00:58.260908 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-07-12 16:00:58.260936 | orchestrator | Saturday 12 July 2025 15:59:42 +0000 (0:00:01.857) 0:01:01.401 *********
2025-07-12 16:00:58.260947 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:00:58.260958 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:00:58.260969 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-07-12 16:00:58.260980 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-07-12 16:00:58.260991 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-07-12 16:00:58.261002 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:00:58.261013 | orchestrator |
2025-07-12 16:00:58.261023 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-07-12 16:00:58.261034 | orchestrator | Saturday 12 July 2025 16:00:22 +0000 (0:00:39.397) 0:01:40.798 *********
2025-07-12 16:00:58.261045 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:00:58.261056 | orchestrator | changed: [testbed-node-1]
2025-07-12 16:00:58.261067 | orchestrator | changed: [testbed-node-2]
2025-07-12 16:00:58.261078 | orchestrator |
2025-07-12 16:00:58.261089 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-07-12 16:00:58.261100 | orchestrator | Saturday 12 July 2025 16:00:51 +0000 (0:00:29.699) 0:02:10.498 *********
2025-07-12 16:00:58.261111 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:00:58.261121 | orchestrator |
2025-07-12 16:00:58.261132 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-07-12 16:00:58.261143 | orchestrator | Saturday 12 July 2025 16:00:54 +0000 (0:00:02.470) 0:02:12.969 *********
2025-07-12 16:00:58.261154 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:00:58.261164 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:00:58.261175 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:00:58.261186 | orchestrator |
2025-07-12 16:00:58.261196 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-07-12 16:00:58.261207 | orchestrator | Saturday 12 July 2025 16:00:54 +0000 (0:00:00.293) 0:02:13.262 *********
2025-07-12 16:00:58.261219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-07-12 16:00:58.261239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-07-12 16:00:58.261251 | orchestrator |
2025-07-12 16:00:58.261262 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-07-12 16:00:58.261273 | orchestrator | Saturday 12 July 2025 16:00:57 +0000 (0:00:02.682) 0:02:15.945 *********
2025-07-12 16:00:58.261284 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:00:58.261295 | orchestrator |
2025-07-12 16:00:58.261306 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 16:00:58.261317 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 16:00:58.261329 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 16:00:58.261340 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 16:00:58.261351 | orchestrator |
2025-07-12 16:00:58.261361 | orchestrator |
2025-07-12 16:00:58.261372 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 16:00:58.261383 | orchestrator | Saturday 12 July 2025 16:00:57 +0000 (0:00:00.263) 0:02:16.208 *********
2025-07-12 16:00:58.261393 | orchestrator | ===============================================================================
2025-07-12 16:00:58.261404 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.40s
2025-07-12 16:00:58.261423 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.27s
2025-07-12 16:00:58.261435 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 29.70s
2025-07-12 16:00:58.261446 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.72s
2025-07-12 16:00:58.261457 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.68s
2025-07-12 16:00:58.261468 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.47s
2025-07-12 16:00:58.261478 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.45s
2025-07-12 16:00:58.261490 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.86s
2025-07-12 16:00:58.261500 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.40s
2025-07-12 16:00:58.261511 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.39s
2025-07-12 16:00:58.261530 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.37s
2025-07-12 16:00:58.261541 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.33s
2025-07-12 16:00:58.261552 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.18s
2025-07-12 16:00:58.261563 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.09s
2025-07-12 16:00:58.261574 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.80s
2025-07-12 16:00:58.261585 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.77s
2025-07-12 16:00:58.261596 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.74s
2025-07-12 16:00:58.261607 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.70s
2025-07-12 16:00:58.261617 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.68s
2025-07-12 16:00:58.261628 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.67s
2025-07-12 16:00:58.261639 | orchestrator | 2025-07-12 16:00:58 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 16:00:58.261710 | orchestrator | 2025-07-12 16:00:58 | INFO  | Wait 1 second(s) until the next check
2025-07-12 16:01:01.296400 | orchestrator | 2025-07-12 16:01:01 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED
2025-07-12 16:01:01.298848 | orchestrator | 2025-07-12 16:01:01 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 16:01:01.299022 | orchestrator | 2025-07-12 16:01:01 | INFO  | Wait 1 second(s) until the next check
2025-07-12 16:01:04.346821 | orchestrator | 2025-07-12 16:01:04 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED
2025-07-12 16:01:04.347437 | orchestrator | 2025-07-12 16:01:04 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state STARTED
2025-07-12 16:01:04.349346 | orchestrator | 2025-07-12 16:01:04 | INFO  | Wait 1 second(s) until the next check
2025-07-12 16:01:07.404449 | orchestrator | 2025-07-12 16:01:07 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED
2025-07-12 16:01:07.408871 | orchestrator | 2025-07-12 16:01:07 | INFO  | Task 96595211-5080-45eb-b7d4-e069ca7ce969 is in state SUCCESS
2025-07-12 16:01:07.411292 | orchestrator |
2025-07-12 16:01:07.411370 | orchestrator |
2025-07-12 16:01:07.411383 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 16:01:07.411395 | orchestrator |
2025-07-12 16:01:07.411406 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-07-12 16:01:07.411418 | orchestrator | Saturday 12 July 2025 15:52:18 +0000 (0:00:00.257) 0:00:00.257 *********
2025-07-12 16:01:07.411430 | orchestrator | changed: [testbed-manager]
2025-07-12 16:01:07.411442 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:01:07.411453 | orchestrator | changed: [testbed-node-1]
2025-07-12 16:01:07.411464 | orchestrator | changed: [testbed-node-2]
2025-07-12 16:01:07.411475 | orchestrator | changed: [testbed-node-3]
2025-07-12 16:01:07.411485 | orchestrator | changed: [testbed-node-4]
2025-07-12 16:01:07.411496 | orchestrator | changed: [testbed-node-5]
2025-07-12 16:01:07.411507 | orchestrator |
2025-07-12 16:01:07.411518 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 16:01:07.411529 | orchestrator | Saturday 12 July 2025 15:52:19 +0000 (0:00:01.434) 0:00:01.691 *********
2025-07-12 16:01:07.411539 | orchestrator | changed: [testbed-manager]
2025-07-12 16:01:07.411550 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:01:07.411561 | orchestrator | changed: [testbed-node-1]
2025-07-12 16:01:07.411572 | orchestrator | changed: [testbed-node-2]
2025-07-12 16:01:07.411582 | orchestrator | changed: [testbed-node-3]
2025-07-12 16:01:07.411593 | orchestrator | changed: [testbed-node-4]
2025-07-12 16:01:07.411605 | orchestrator | changed: [testbed-node-5]
2025-07-12 16:01:07.411616 | orchestrator |
2025-07-12 16:01:07.411627 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 16:01:07.411638 | orchestrator | Saturday 12 July 2025 15:52:20 +0000 (0:00:01.239) 0:00:02.930 *********
2025-07-12 16:01:07.411649 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-07-12 16:01:07.411660 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-07-12 16:01:07.411671 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-07-12 16:01:07.411843 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-07-12 16:01:07.411860 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-07-12 16:01:07.411874 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-07-12 16:01:07.411886 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-07-12 16:01:07.411933 | orchestrator |
2025-07-12 16:01:07.411947 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-07-12 16:01:07.411959 | orchestrator |
2025-07-12 16:01:07.411972 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-07-12 16:01:07.412012 | orchestrator | Saturday 12 July 2025 15:52:22 +0000 (0:00:01.646) 0:00:04.577 *********
2025-07-12 16:01:07.412025 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 16:01:07.412037 | orchestrator |
2025-07-12 16:01:07.412049 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-07-12 16:01:07.412062 | orchestrator | Saturday 12 July 2025 15:52:23 +0000 (0:00:01.214) 0:00:05.792 *********
2025-07-12 16:01:07.412075 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-07-12 16:01:07.412089 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-07-12 16:01:07.412103 | orchestrator |
2025-07-12 16:01:07.412130 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-07-12 16:01:07.412143 | orchestrator | Saturday 12 July 2025 15:52:28 +0000 (0:00:04.801) 0:00:10.593 *********
2025-07-12 16:01:07.412156 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 16:01:07.412170 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 16:01:07.412182 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:01:07.412193 | orchestrator |
2025-07-12 16:01:07.412203 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-07-12 16:01:07.412214 | orchestrator | Saturday 12 July 2025 15:52:33 +0000 (0:00:00.745) 0:00:15.011 *********
2025-07-12 16:01:07.412225 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:01:07.412236 | orchestrator |
2025-07-12 16:01:07.412247 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-07-12 16:01:07.412257 | orchestrator | Saturday 12 July 2025 15:52:33 +0000 (0:00:00.745) 0:00:15.756 *********
2025-07-12 16:01:07.412268 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:01:07.412279 | orchestrator |
2025-07-12 16:01:07.412290 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-07-12 16:01:07.412300 | orchestrator | Saturday 12 July 2025 15:52:35 +0000 (0:00:01.395) 0:00:17.151 *********
2025-07-12 16:01:07.412311 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:01:07.412322 | orchestrator |
2025-07-12 16:01:07.412332 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 16:01:07.412343 | orchestrator | Saturday 12 July 2025 15:52:38 +0000 (0:00:02.949) 0:00:20.101 *********
2025-07-12 16:01:07.412354 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.412365 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.412375 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.412386 | orchestrator |
2025-07-12 16:01:07.412397 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-07-12 16:01:07.412407 | orchestrator | Saturday 12 July 2025 15:52:38 +0000 (0:00:00.571) 0:00:20.673 *********
2025-07-12 16:01:07.412418 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:01:07.412429 | orchestrator |
2025-07-12 16:01:07.412440 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-07-12 16:01:07.412451 | orchestrator | Saturday 12 July 2025 15:53:09 +0000 (0:00:30.728) 0:00:51.404 *********
2025-07-12 16:01:07.412462 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:01:07.412473 | orchestrator |
2025-07-12 16:01:07.412484 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-12 16:01:07.412494 | orchestrator | Saturday 12 July 2025 15:53:23 +0000 (0:00:14.493) 0:01:05.897 *********
2025-07-12 16:01:07.412524 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:01:07.412536 | orchestrator |
2025-07-12 16:01:07.412547 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-12 16:01:07.412558 | orchestrator | Saturday 12 July 2025 15:53:37 +0000 (0:00:13.319) 0:01:19.217 *********
2025-07-12 16:01:07.412594 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:01:07.412606 | orchestrator |
2025-07-12 16:01:07.412617 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-07-12 16:01:07.412628 | orchestrator | Saturday 12 July 2025 15:53:38 +0000 (0:00:01.172) 0:01:20.390 *********
2025-07-12 16:01:07.412647 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.412658 | orchestrator |
2025-07-12 16:01:07.412668 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 16:01:07.412679 | orchestrator | Saturday 12 July 2025 15:53:38 +0000 (0:00:00.472) 0:01:20.862 *********
2025-07-12 16:01:07.412744 | orchestrator | included:
/ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:01:07.412755 | orchestrator | 2025-07-12 16:01:07.412766 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-07-12 16:01:07.412777 | orchestrator | Saturday 12 July 2025 15:53:39 +0000 (0:00:00.577) 0:01:21.440 ********* 2025-07-12 16:01:07.412788 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:01:07.412799 | orchestrator | 2025-07-12 16:01:07.412810 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-07-12 16:01:07.412821 | orchestrator | Saturday 12 July 2025 15:53:59 +0000 (0:00:20.336) 0:01:41.776 ********* 2025-07-12 16:01:07.412832 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.412842 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.412853 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.412864 | orchestrator | 2025-07-12 16:01:07.412875 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-07-12 16:01:07.412886 | orchestrator | 2025-07-12 16:01:07.412916 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-07-12 16:01:07.412927 | orchestrator | Saturday 12 July 2025 15:54:00 +0000 (0:00:00.303) 0:01:42.080 ********* 2025-07-12 16:01:07.412938 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:01:07.412949 | orchestrator | 2025-07-12 16:01:07.412960 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-07-12 16:01:07.412971 | orchestrator | Saturday 12 July 2025 15:54:00 +0000 (0:00:00.672) 0:01:42.752 ********* 2025-07-12 16:01:07.412982 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.412993 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413004 | orchestrator | changed: 
[testbed-node-0] 2025-07-12 16:01:07.413014 | orchestrator | 2025-07-12 16:01:07.413025 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-07-12 16:01:07.413036 | orchestrator | Saturday 12 July 2025 15:54:03 +0000 (0:00:02.266) 0:01:45.018 ********* 2025-07-12 16:01:07.413047 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413058 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413069 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:01:07.413080 | orchestrator | 2025-07-12 16:01:07.413091 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-07-12 16:01:07.413101 | orchestrator | Saturday 12 July 2025 15:54:05 +0000 (0:00:02.523) 0:01:47.542 ********* 2025-07-12 16:01:07.413112 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.413123 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413134 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413145 | orchestrator | 2025-07-12 16:01:07.413156 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-12 16:01:07.413172 | orchestrator | Saturday 12 July 2025 15:54:06 +0000 (0:00:00.706) 0:01:48.249 ********* 2025-07-12 16:01:07.413183 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-12 16:01:07.413194 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413205 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-12 16:01:07.413216 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413226 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-07-12 16:01:07.413238 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-07-12 16:01:07.413248 | orchestrator | 2025-07-12 16:01:07.413259 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-07-12 16:01:07.413270 | 
orchestrator | Saturday 12 July 2025 15:54:16 +0000 (0:00:09.772) 0:01:58.021 ********* 2025-07-12 16:01:07.413281 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.413299 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413310 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413321 | orchestrator | 2025-07-12 16:01:07.413331 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-12 16:01:07.413342 | orchestrator | Saturday 12 July 2025 15:54:16 +0000 (0:00:00.237) 0:01:58.259 ********* 2025-07-12 16:01:07.413353 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-12 16:01:07.413364 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.413375 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-12 16:01:07.413385 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413396 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-12 16:01:07.413407 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413418 | orchestrator | 2025-07-12 16:01:07.413428 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-12 16:01:07.413439 | orchestrator | Saturday 12 July 2025 15:54:16 +0000 (0:00:00.641) 0:01:58.901 ********* 2025-07-12 16:01:07.413450 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413461 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413472 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:01:07.413483 | orchestrator | 2025-07-12 16:01:07.413493 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-07-12 16:01:07.413504 | orchestrator | Saturday 12 July 2025 15:54:17 +0000 (0:00:00.529) 0:01:59.431 ********* 2025-07-12 16:01:07.413515 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413526 | orchestrator | skipping: [testbed-node-2] 2025-07-12 
16:01:07.413537 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:01:07.413547 | orchestrator | 2025-07-12 16:01:07.413558 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-07-12 16:01:07.413569 | orchestrator | Saturday 12 July 2025 15:54:18 +0000 (0:00:00.972) 0:02:00.403 ********* 2025-07-12 16:01:07.413580 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413591 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413609 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:01:07.413620 | orchestrator | 2025-07-12 16:01:07.413631 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-07-12 16:01:07.413642 | orchestrator | Saturday 12 July 2025 15:54:21 +0000 (0:00:02.758) 0:02:03.162 ********* 2025-07-12 16:01:07.413653 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413663 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413674 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:01:07.413685 | orchestrator | 2025-07-12 16:01:07.413696 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-12 16:01:07.413707 | orchestrator | Saturday 12 July 2025 15:54:42 +0000 (0:00:21.582) 0:02:24.744 ********* 2025-07-12 16:01:07.413718 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413729 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413739 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:01:07.413750 | orchestrator | 2025-07-12 16:01:07.413761 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-12 16:01:07.413772 | orchestrator | Saturday 12 July 2025 15:54:55 +0000 (0:00:12.729) 0:02:37.473 ********* 2025-07-12 16:01:07.413783 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:01:07.413793 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413804 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413815 | orchestrator | 2025-07-12 16:01:07.413826 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-07-12 16:01:07.413836 | orchestrator | Saturday 12 July 2025 15:54:56 +0000 (0:00:00.832) 0:02:38.306 ********* 2025-07-12 16:01:07.413847 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.413858 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.413869 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:01:07.413880 | orchestrator | 2025-07-12 16:01:07.413948 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-07-12 16:01:07.413969 | orchestrator | Saturday 12 July 2025 15:55:09 +0000 (0:00:12.966) 0:02:51.272 ********* 2025-07-12 16:01:07.413980 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.413991 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.414002 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.414012 | orchestrator | 2025-07-12 16:01:07.414076 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-07-12 16:01:07.414088 | orchestrator | Saturday 12 July 2025 15:55:10 +0000 (0:00:01.550) 0:02:52.823 ********* 2025-07-12 16:01:07.414099 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.414109 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.414120 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.414131 | orchestrator | 2025-07-12 16:01:07.414142 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-07-12 16:01:07.414153 | orchestrator | 2025-07-12 16:01:07.414163 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-12 16:01:07.414174 | orchestrator | Saturday 12 July 2025 15:55:11 +0000 (0:00:00.316) 0:02:53.139 ********* 
2025-07-12 16:01:07.414185 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:01:07.414197 | orchestrator | 2025-07-12 16:01:07.414208 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-07-12 16:01:07.414219 | orchestrator | Saturday 12 July 2025 15:55:11 +0000 (0:00:00.518) 0:02:53.658 ********* 2025-07-12 16:01:07.414236 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-07-12 16:01:07.414247 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-07-12 16:01:07.414258 | orchestrator | 2025-07-12 16:01:07.414269 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-07-12 16:01:07.414280 | orchestrator | Saturday 12 July 2025 15:55:15 +0000 (0:00:03.399) 0:02:57.058 ********* 2025-07-12 16:01:07.414291 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-07-12 16:01:07.414303 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-07-12 16:01:07.414314 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-07-12 16:01:07.414326 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-07-12 16:01:07.414337 | orchestrator | 2025-07-12 16:01:07.414347 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-07-12 16:01:07.414358 | orchestrator | Saturday 12 July 2025 15:55:22 +0000 (0:00:07.032) 0:03:04.091 ********* 2025-07-12 16:01:07.414369 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 16:01:07.414380 | orchestrator | 2025-07-12 16:01:07.414390 | orchestrator | TASK 
[service-ks-register : nova | Creating users] ***************************** 2025-07-12 16:01:07.414401 | orchestrator | Saturday 12 July 2025 15:55:25 +0000 (0:00:03.691) 0:03:07.782 ********* 2025-07-12 16:01:07.414412 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 16:01:07.414422 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-07-12 16:01:07.414433 | orchestrator | 2025-07-12 16:01:07.414444 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-07-12 16:01:07.414454 | orchestrator | Saturday 12 July 2025 15:55:29 +0000 (0:00:04.158) 0:03:11.940 ********* 2025-07-12 16:01:07.414465 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 16:01:07.414476 | orchestrator | 2025-07-12 16:01:07.414486 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-07-12 16:01:07.414497 | orchestrator | Saturday 12 July 2025 15:55:33 +0000 (0:00:03.644) 0:03:15.585 ********* 2025-07-12 16:01:07.414508 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-07-12 16:01:07.414519 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-07-12 16:01:07.414537 | orchestrator | 2025-07-12 16:01:07.414548 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-07-12 16:01:07.414567 | orchestrator | Saturday 12 July 2025 15:55:41 +0000 (0:00:08.245) 0:03:23.831 ********* 2025-07-12 16:01:07.414584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.414607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.414620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.414648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.414662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.414674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.414686 | orchestrator | 2025-07-12 16:01:07.414697 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-07-12 16:01:07.414708 | orchestrator | Saturday 12 July 2025 15:55:43 +0000 (0:00:01.842) 0:03:25.673 ********* 2025-07-12 16:01:07.414719 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.414730 | orchestrator | 2025-07-12 16:01:07.414741 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-07-12 16:01:07.414752 | orchestrator | Saturday 12 July 2025 15:55:43 +0000 (0:00:00.232) 0:03:25.906 ********* 2025-07-12 16:01:07.414762 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 16:01:07.414773 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.414784 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.414795 | orchestrator | 2025-07-12 16:01:07.414806 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-07-12 16:01:07.414816 | orchestrator | Saturday 12 July 2025 15:55:44 +0000 (0:00:00.816) 0:03:26.722 ********* 2025-07-12 16:01:07.414870 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 16:01:07.414882 | orchestrator | 2025-07-12 16:01:07.414954 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-07-12 16:01:07.414967 | orchestrator | Saturday 12 July 2025 15:55:45 +0000 (0:00:00.735) 0:03:27.458 ********* 2025-07-12 16:01:07.414978 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.414989 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.415000 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.415010 | orchestrator | 2025-07-12 16:01:07.415021 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-12 16:01:07.415032 | orchestrator | Saturday 12 July 2025 15:55:45 +0000 (0:00:00.285) 0:03:27.744 ********* 2025-07-12 16:01:07.415115 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:01:07.415129 | orchestrator | 2025-07-12 16:01:07.415140 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-12 16:01:07.415151 | orchestrator | Saturday 12 July 2025 15:55:47 +0000 (0:00:01.750) 0:03:29.495 ********* 2025-07-12 16:01:07.415181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.415195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.415214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.415227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.415306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.415328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.415340 | orchestrator | 2025-07-12 16:01:07.415351 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-12 16:01:07.415362 | orchestrator | Saturday 12 July 2025 15:55:50 +0000 (0:00:02.807) 0:03:32.302 ********* 2025-07-12 16:01:07.415374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 16:01:07.415398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 16:01:07.415435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.415448 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.415466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.415478 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.415490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 16:01:07.415502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.415514 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.415525 | orchestrator | 2025-07-12 16:01:07.415536 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-12 
16:01:07.415547 | orchestrator | Saturday 12 July 2025 15:55:51 +0000 (0:00:01.010) 0:03:33.313 ********* 2025-07-12 16:01:07.415563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 16:01:07.415585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-07-12 16:01:07.415597 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.415617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 16:01:07.415630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}})  2025-07-12 16:01:07.415641 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.415658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 16:01:07.415679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2025-07-12 16:01:07.415690 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.415701 | orchestrator | 2025-07-12 16:01:07.415712 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-07-12 16:01:07.415723 | orchestrator | Saturday 12 July 2025 15:55:52 +0000 (0:00:01.115) 0:03:34.428 ********* 2025-07-12 16:01:07.415742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.415761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.415780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.415798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.415811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.415823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.415834 | orchestrator | 2025-07-12 16:01:07.415845 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-07-12 16:01:07.415856 | orchestrator | Saturday 12 July 2025 15:55:55 +0000 (0:00:03.271) 0:03:37.700 ********* 2025-07-12 16:01:07.415868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.415888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.415970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.415983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.416043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.416062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.416073 | orchestrator | 2025-07-12 16:01:07.416085 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-07-12 16:01:07.416096 | orchestrator | Saturday 12 July 2025 15:56:03 +0000 (0:00:07.703) 0:03:45.403 ********* 2025-07-12 16:01:07.416115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 16:01:07.416128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.416139 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.416152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 16:01:07.416174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.416186 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.416198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 16:01:07.416218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.416230 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.416241 | orchestrator | 2025-07-12 16:01:07.416252 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-07-12 16:01:07.416263 | orchestrator | Saturday 12 July 2025 15:56:03 +0000 (0:00:00.533) 0:03:45.937 ********* 2025-07-12 16:01:07.416274 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:01:07.416285 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:01:07.416295 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:01:07.416306 | orchestrator | 2025-07-12 16:01:07.416317 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-07-12 16:01:07.416327 | orchestrator | Saturday 12 July 2025 15:56:06 +0000 (0:00:02.843) 0:03:48.781 ********* 2025-07-12 16:01:07.416338 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.416355 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.416365 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.416376 | orchestrator | 2025-07-12 16:01:07.416387 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-07-12 16:01:07.416397 | orchestrator | Saturday 12 July 2025 15:56:07 +0000 (0:00:00.305) 0:03:49.086 ********* 2025-07-12 16:01:07.416417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.416430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.416450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 16:01:07.416468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.416478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.416493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.416503 | orchestrator | 2025-07-12 16:01:07.416513 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-12 16:01:07.416523 | orchestrator | Saturday 12 July 2025 15:56:09 +0000 (0:00:02.451) 0:03:51.538 ********* 2025-07-12 16:01:07.416533 | orchestrator | 2025-07-12 16:01:07.416542 | orchestrator | TASK [nova : Flush 
handlers] *************************************************** 2025-07-12 16:01:07.416552 | orchestrator | Saturday 12 July 2025 15:56:09 +0000 (0:00:00.259) 0:03:51.798 ********* 2025-07-12 16:01:07.416561 | orchestrator | 2025-07-12 16:01:07.416571 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-12 16:01:07.416580 | orchestrator | Saturday 12 July 2025 15:56:10 +0000 (0:00:00.276) 0:03:52.074 ********* 2025-07-12 16:01:07.416590 | orchestrator | 2025-07-12 16:01:07.416599 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-07-12 16:01:07.416609 | orchestrator | Saturday 12 July 2025 15:56:11 +0000 (0:00:01.032) 0:03:53.107 ********* 2025-07-12 16:01:07.416618 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:01:07.416628 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:01:07.416637 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:01:07.416647 | orchestrator | 2025-07-12 16:01:07.416656 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-07-12 16:01:07.416665 | orchestrator | Saturday 12 July 2025 15:56:35 +0000 (0:00:24.745) 0:04:17.853 ********* 2025-07-12 16:01:07.416675 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:01:07.416684 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:01:07.416694 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:01:07.416703 | orchestrator | 2025-07-12 16:01:07.416713 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-07-12 16:01:07.416722 | orchestrator | 2025-07-12 16:01:07.416732 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 16:01:07.416741 | orchestrator | Saturday 12 July 2025 15:56:43 +0000 (0:00:07.664) 0:04:25.518 ********* 2025-07-12 16:01:07.416751 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:01:07.416768 | orchestrator | 2025-07-12 16:01:07.416783 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 16:01:07.416793 | orchestrator | Saturday 12 July 2025 15:56:46 +0000 (0:00:02.749) 0:04:28.267 ********* 2025-07-12 16:01:07.416803 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.416812 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.416821 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.416831 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.416840 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.416850 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.416859 | orchestrator | 2025-07-12 16:01:07.416868 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-07-12 16:01:07.416878 | orchestrator | Saturday 12 July 2025 15:56:46 +0000 (0:00:00.606) 0:04:28.874 ********* 2025-07-12 16:01:07.416888 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.416914 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.416924 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.416934 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 16:01:07.416943 | orchestrator | 2025-07-12 16:01:07.416953 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-12 16:01:07.416962 | orchestrator | Saturday 12 July 2025 15:56:48 +0000 (0:00:01.453) 0:04:30.328 ********* 2025-07-12 16:01:07.416972 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-07-12 16:01:07.416982 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-07-12 16:01:07.416991 | orchestrator | ok: [testbed-node-5] => 
(item=br_netfilter) 2025-07-12 16:01:07.417001 | orchestrator | 2025-07-12 16:01:07.417010 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-12 16:01:07.417020 | orchestrator | Saturday 12 July 2025 15:56:49 +0000 (0:00:00.984) 0:04:31.314 ********* 2025-07-12 16:01:07.417029 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-07-12 16:01:07.417039 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-07-12 16:01:07.417049 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-07-12 16:01:07.417059 | orchestrator | 2025-07-12 16:01:07.417068 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-12 16:01:07.417078 | orchestrator | Saturday 12 July 2025 15:56:50 +0000 (0:00:01.356) 0:04:32.671 ********* 2025-07-12 16:01:07.417087 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-07-12 16:01:07.417097 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.417106 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-07-12 16:01:07.417116 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.417125 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-07-12 16:01:07.417135 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.417144 | orchestrator | 2025-07-12 16:01:07.417154 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-07-12 16:01:07.417163 | orchestrator | Saturday 12 July 2025 15:56:51 +0000 (0:00:00.855) 0:04:33.526 ********* 2025-07-12 16:01:07.417173 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 16:01:07.417182 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 16:01:07.417192 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.417206 | 
orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 16:01:07.417216 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 16:01:07.417225 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.417235 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 16:01:07.417244 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 16:01:07.417260 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.417270 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 16:01:07.417279 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 16:01:07.417289 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 16:01:07.417298 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 16:01:07.417308 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 16:01:07.417317 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 16:01:07.417327 | orchestrator | 2025-07-12 16:01:07.417336 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-07-12 16:01:07.417346 | orchestrator | Saturday 12 July 2025 15:56:52 +0000 (0:00:01.206) 0:04:34.733 ********* 2025-07-12 16:01:07.417355 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.417365 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.417374 | orchestrator | changed: [testbed-node-3] 2025-07-12 16:01:07.417384 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.417393 | orchestrator | changed: [testbed-node-4] 2025-07-12 16:01:07.417403 | orchestrator | changed: [testbed-node-5] 2025-07-12 
16:01:07.417412 | orchestrator | 2025-07-12 16:01:07.417422 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-07-12 16:01:07.417431 | orchestrator | Saturday 12 July 2025 15:56:54 +0000 (0:00:01.578) 0:04:36.311 ********* 2025-07-12 16:01:07.417441 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.417450 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.417460 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.417469 | orchestrator | changed: [testbed-node-5] 2025-07-12 16:01:07.417478 | orchestrator | changed: [testbed-node-4] 2025-07-12 16:01:07.417488 | orchestrator | changed: [testbed-node-3] 2025-07-12 16:01:07.417497 | orchestrator | 2025-07-12 16:01:07.417507 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-12 16:01:07.417516 | orchestrator | Saturday 12 July 2025 15:56:56 +0000 (0:00:01.894) 0:04:38.206 ********* 2025-07-12 16:01:07.417533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417545 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417568 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 
16:01:07.417699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417742 | orchestrator | 2025-07-12 16:01:07.417752 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 16:01:07.417762 | orchestrator | Saturday 12 July 2025 15:56:59 +0000 (0:00:03.033) 0:04:41.240 ********* 2025-07-12 16:01:07.417772 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:01:07.417782 | orchestrator | 2025-07-12 16:01:07.417791 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-12 16:01:07.417801 | orchestrator | Saturday 12 July 2025 15:57:00 +0000 (0:00:01.167) 0:04:42.407 ********* 2025-07-12 16:01:07.417811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417840 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417911 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417939 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.417994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.418448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.418475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-07-12 16:01:07.418497 | orchestrator | 2025-07-12 16:01:07.418508 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-12 16:01:07.418518 | orchestrator | Saturday 12 July 2025 15:57:04 +0000 (0:00:03.598) 0:04:46.005 ********* 2025-07-12 16:01:07.418528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.418545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.418556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.418566 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.418585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.418596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.418613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.418624 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.418638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.418649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.418660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.418670 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.418686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 16:01:07.418703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.418713 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.418723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 16:01:07.418738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.418749 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.418759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 16:01:07.418769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.418779 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.418789 | orchestrator | 2025-07-12 16:01:07.418799 | orchestrator | TASK [service-cert-copy : nova | 
Copying over backend internal TLS key] ******** 2025-07-12 16:01:07.418809 | orchestrator | Saturday 12 July 2025 15:57:06 +0000 (0:00:02.278) 0:04:48.284 ********* 2025-07-12 16:01:07.418825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.418842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.418852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.418863 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.418877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.418887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.418972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.418994 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.419004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 16:01:07.419014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.419024 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.419039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.419051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.419064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.419083 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.419102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 16:01:07.419114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.419126 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.419138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 16:01:07.419154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.419166 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.419177 | orchestrator | 2025-07-12 16:01:07.419189 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 16:01:07.419203 | orchestrator | Saturday 12 July 2025 15:57:08 +0000 (0:00:02.178) 0:04:50.463 ********* 
2025-07-12 16:01:07.419221 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.419234 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.419246 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.419257 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 16:01:07.419270 | orchestrator | 2025-07-12 16:01:07.419288 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-07-12 16:01:07.419304 | orchestrator | Saturday 12 July 2025 15:57:09 +0000 (0:00:00.738) 0:04:51.201 ********* 2025-07-12 16:01:07.419322 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 16:01:07.419343 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 16:01:07.419355 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 16:01:07.419367 | orchestrator | 2025-07-12 16:01:07.419378 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-07-12 16:01:07.419390 | orchestrator | Saturday 12 July 2025 15:57:10 +0000 (0:00:00.960) 0:04:52.161 ********* 2025-07-12 16:01:07.419401 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 16:01:07.419412 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 16:01:07.419421 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 16:01:07.419431 | orchestrator | 2025-07-12 16:01:07.419440 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-07-12 16:01:07.419450 | orchestrator | Saturday 12 July 2025 15:57:11 +0000 (0:00:00.915) 0:04:53.077 ********* 2025-07-12 16:01:07.419460 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:01:07.419469 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:01:07.419477 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:01:07.419485 | orchestrator | 2025-07-12 16:01:07.419493 | orchestrator | TASK [nova-cell : 
Extract cinder key from file] ******************************** 2025-07-12 16:01:07.419501 | orchestrator | Saturday 12 July 2025 15:57:11 +0000 (0:00:00.437) 0:04:53.515 ********* 2025-07-12 16:01:07.419508 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:01:07.419516 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:01:07.419524 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:01:07.419532 | orchestrator | 2025-07-12 16:01:07.419540 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-07-12 16:01:07.419548 | orchestrator | Saturday 12 July 2025 15:57:12 +0000 (0:00:00.536) 0:04:54.051 ********* 2025-07-12 16:01:07.419556 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-12 16:01:07.419569 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-12 16:01:07.419578 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-12 16:01:07.419586 | orchestrator | 2025-07-12 16:01:07.419593 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-07-12 16:01:07.419601 | orchestrator | Saturday 12 July 2025 15:57:13 +0000 (0:00:01.448) 0:04:55.500 ********* 2025-07-12 16:01:07.419609 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-12 16:01:07.419617 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-12 16:01:07.419625 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-12 16:01:07.419633 | orchestrator | 2025-07-12 16:01:07.419640 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-07-12 16:01:07.419648 | orchestrator | Saturday 12 July 2025 15:57:14 +0000 (0:00:01.198) 0:04:56.699 ********* 2025-07-12 16:01:07.419656 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-12 16:01:07.419664 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-12 
16:01:07.419672 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-12 16:01:07.419679 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-07-12 16:01:07.419687 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-07-12 16:01:07.419695 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-07-12 16:01:07.419703 | orchestrator | 2025-07-12 16:01:07.419711 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-07-12 16:01:07.419719 | orchestrator | Saturday 12 July 2025 15:57:18 +0000 (0:00:03.992) 0:05:00.691 ********* 2025-07-12 16:01:07.419727 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.419734 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.419742 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.419750 | orchestrator | 2025-07-12 16:01:07.419758 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-07-12 16:01:07.419766 | orchestrator | Saturday 12 July 2025 15:57:19 +0000 (0:00:00.302) 0:05:00.994 ********* 2025-07-12 16:01:07.419774 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.419781 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.419794 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.419802 | orchestrator | 2025-07-12 16:01:07.419810 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-07-12 16:01:07.419818 | orchestrator | Saturday 12 July 2025 15:57:19 +0000 (0:00:00.303) 0:05:01.297 ********* 2025-07-12 16:01:07.419826 | orchestrator | changed: [testbed-node-3] 2025-07-12 16:01:07.419834 | orchestrator | changed: [testbed-node-4] 2025-07-12 16:01:07.419842 | orchestrator | changed: [testbed-node-5] 2025-07-12 16:01:07.419849 | orchestrator | 2025-07-12 16:01:07.419857 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] 
************************* 2025-07-12 16:01:07.419865 | orchestrator | Saturday 12 July 2025 15:57:20 +0000 (0:00:01.438) 0:05:02.736 ********* 2025-07-12 16:01:07.419873 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-12 16:01:07.419889 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-12 16:01:07.419913 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-12 16:01:07.419921 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-12 16:01:07.419929 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-12 16:01:07.419937 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-12 16:01:07.419945 | orchestrator | 2025-07-12 16:01:07.419953 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-07-12 16:01:07.419961 | orchestrator | Saturday 12 July 2025 15:57:23 +0000 (0:00:03.215) 0:05:05.951 ********* 2025-07-12 16:01:07.419968 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 16:01:07.419976 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 16:01:07.419984 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 16:01:07.419992 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 16:01:07.420000 | orchestrator | changed: [testbed-node-3] 2025-07-12 16:01:07.420007 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 
16:01:07.420015 | orchestrator | changed: [testbed-node-4] 2025-07-12 16:01:07.420023 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 16:01:07.420030 | orchestrator | changed: [testbed-node-5] 2025-07-12 16:01:07.420038 | orchestrator | 2025-07-12 16:01:07.420046 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-07-12 16:01:07.420054 | orchestrator | Saturday 12 July 2025 15:57:27 +0000 (0:00:03.445) 0:05:09.396 ********* 2025-07-12 16:01:07.420062 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.420070 | orchestrator | 2025-07-12 16:01:07.420077 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-07-12 16:01:07.420085 | orchestrator | Saturday 12 July 2025 15:57:27 +0000 (0:00:00.235) 0:05:09.631 ********* 2025-07-12 16:01:07.420093 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.420101 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.420109 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.420116 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.420124 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.420132 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.420140 | orchestrator | 2025-07-12 16:01:07.420147 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-07-12 16:01:07.420160 | orchestrator | Saturday 12 July 2025 15:57:28 +0000 (0:00:01.193) 0:05:10.825 ********* 2025-07-12 16:01:07.420168 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 16:01:07.420176 | orchestrator | 2025-07-12 16:01:07.420189 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-07-12 16:01:07.420197 | orchestrator | Saturday 12 July 2025 15:57:29 +0000 (0:00:00.925) 0:05:11.751 ********* 2025-07-12 16:01:07.420205 | orchestrator | skipping: 
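The `Pushing nova secret xml for libvirt` task above loops over items like `{'uuid': ..., 'name': 'client.nova secret', 'enabled': True}` and defines one libvirt secret per item. A sketch of the XML such an item would render to — modeled on libvirt's documented ceph-usage secret format, not taken from the actual kolla-ansible template:

```python
# Assumed template, modeled on libvirt's <secret> format for ceph usage.
SECRET_XML = """\
<secret ephemeral='no' private='no'>
  <uuid>{uuid}</uuid>
  <usage type='ceph'>
    <name>{name}</name>
  </usage>
</secret>
"""

def render_secret(item):
    # item mirrors the loop items in the log:
    # {'uuid': ..., 'name': ..., 'enabled': ...}
    return SECRET_XML.format(uuid=item["uuid"], name=item["name"])

xml = render_secret({
    "uuid": "5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd",
    "name": "client.nova secret",
    "enabled": True,
})
```

The follow-up `Pushing secrets key for libvirt` task then sets the secret's value (the key extracted from the keyring); its item output is censored to `(item=None)` in the log precisely because that value is sensitive.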
[testbed-node-3] 2025-07-12 16:01:07.420213 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.420220 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.420228 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.420236 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.420243 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.420251 | orchestrator | 2025-07-12 16:01:07.420259 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-07-12 16:01:07.420267 | orchestrator | Saturday 12 July 2025 15:57:30 +0000 (0:00:00.532) 0:05:12.284 ********* 2025-07-12 16:01:07.420276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-07-12 16:01:07.420415 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420436 | orchestrator | 2025-07-12 16:01:07.420444 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-07-12 16:01:07.420452 | orchestrator | Saturday 12 July 2025 15:57:34 +0000 (0:00:04.255) 0:05:16.539 ********* 2025-07-12 
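
Note the empty `''` entries inside the `volumes` lists of the config items above: these are placeholders left by Jinja conditionals that rendered to nothing (optional mounts that are disabled in this deployment), and they are filtered out before the container is started. A minimal sketch of that filtering step, assuming a plain Python list rather than the role's actual code:

```python
# Sketch: drop the empty-string placeholders that disabled conditional
# volume entries leave behind (visible as '' in the log items above).
volumes = [
    "/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro",
    "/run:/run:shared",
    "",                            # a disabled conditional mount
    "kolla_logs:/var/log/kolla/",
    "",                            # another disabled conditional mount
]

mounts = [v for v in volumes if v]  # keep only non-empty bind specs
print(mounts)
```

Only the three real bind specifications survive, which is why the `''` entries in the log are harmless.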
16:01:07.420460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.420478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.420486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.420495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.420507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}})  2025-07-12 16:01:07.420515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.420533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420562 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.420623 | orchestrator | 2025-07-12 16:01:07.420631 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-07-12 16:01:07.420639 | orchestrator | Saturday 12 July 2025 15:57:40 +0000 (0:00:06.191) 0:05:22.731 ********* 2025-07-12 16:01:07.420647 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.420655 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.420663 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.420671 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.420678 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.420686 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.420694 | orchestrator | 2025-07-12 16:01:07.420702 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-07-12 16:01:07.420710 | orchestrator | Saturday 12 July 2025 
15:57:42 +0000 (0:00:01.309) 0:05:24.040 ********* 2025-07-12 16:01:07.420717 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-12 16:01:07.420725 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-12 16:01:07.420737 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-12 16:01:07.420745 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-12 16:01:07.420753 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-12 16:01:07.420761 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.420774 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-12 16:01:07.420782 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.420790 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-12 16:01:07.420797 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-12 16:01:07.420805 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-12 16:01:07.420813 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.420821 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-12 16:01:07.420829 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-12 16:01:07.420837 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-12 16:01:07.420845 | orchestrator | 2025-07-12 16:01:07.420853 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 
2025-07-12 16:01:07.420860 | orchestrator | Saturday 12 July 2025 15:57:45 +0000 (0:00:03.382) 0:05:27.422 ********* 2025-07-12 16:01:07.420868 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.420876 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.420884 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.420909 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.420917 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.420925 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.420933 | orchestrator | 2025-07-12 16:01:07.420941 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-07-12 16:01:07.420949 | orchestrator | Saturday 12 July 2025 15:57:46 +0000 (0:00:00.768) 0:05:28.191 ********* 2025-07-12 16:01:07.420957 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-12 16:01:07.420965 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-12 16:01:07.420977 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-12 16:01:07.420985 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-12 16:01:07.420993 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-12 16:01:07.421001 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-12 16:01:07.421009 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-12 16:01:07.421017 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-12 16:01:07.421025 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-12 16:01:07.421033 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 16:01:07.421040 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.421048 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 16:01:07.421056 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.421064 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 16:01:07.421072 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.421080 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 16:01:07.421093 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 16:01:07.421101 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 16:01:07.421108 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 16:01:07.421116 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 16:01:07.421124 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 16:01:07.421132 | orchestrator | 2025-07-12 16:01:07.421140 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-07-12 
16:01:07.421148 | orchestrator | Saturday 12 July 2025 15:57:52 +0000 (0:00:06.366) 0:05:34.558 ********* 2025-07-12 16:01:07.421156 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 16:01:07.421169 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 16:01:07.421177 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 16:01:07.421185 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 16:01:07.421193 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 16:01:07.421201 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 16:01:07.421209 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 16:01:07.421216 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 16:01:07.421224 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 16:01:07.421232 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 16:01:07.421240 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 16:01:07.421247 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 16:01:07.421255 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 16:01:07.421263 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 16:01:07.421271 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-12 
16:01:07.421278 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.421286 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 16:01:07.421294 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.421302 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 16:01:07.421310 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.421318 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 16:01:07.421326 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 16:01:07.421333 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 16:01:07.421345 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 16:01:07.421353 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 16:01:07.421361 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 16:01:07.421369 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 16:01:07.421381 | orchestrator |
2025-07-12 16:01:07.421389 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-07-12 16:01:07.421397 | orchestrator | Saturday 12 July 2025 15:58:01 +0000 (0:00:08.689) 0:05:43.247 *********
2025-07-12 16:01:07.421405 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:01:07.421413 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:01:07.421421 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:01:07.421428 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.421436 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.421444 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.421452 | orchestrator |
2025-07-12 16:01:07.421459 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-07-12 16:01:07.421467 | orchestrator | Saturday 12 July 2025 15:58:01 +0000 (0:00:00.527) 0:05:43.775 *********
2025-07-12 16:01:07.421475 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:01:07.421483 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:01:07.421491 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:01:07.421499 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.421506 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.421514 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.421522 | orchestrator |
2025-07-12 16:01:07.421530 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-07-12 16:01:07.421537 | orchestrator | Saturday 12 July 2025 15:58:02 +0000 (0:00:00.753) 0:05:44.529 *********
2025-07-12 16:01:07.421545 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.421553 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.421561 | orchestrator | changed: [testbed-node-3]
2025-07-12 16:01:07.421569 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.421576 | orchestrator | changed: [testbed-node-4]
2025-07-12 16:01:07.421584 | orchestrator | changed: [testbed-node-5]
2025-07-12 16:01:07.421592 | orchestrator |
2025-07-12 16:01:07.421600 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-07-12 16:01:07.421608 | orchestrator | Saturday 12 July 2025 15:58:04 +0000 (0:00:02.204) 0:05:46.733 *********
2025-07-12 16:01:07.421619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.421628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.421637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.421654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.421663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.421672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.421680 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.421692 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.421701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 16:01:07.421709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 16:01:07.421771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.421782 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.421790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 16:01:07.421799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.421807 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.421819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 16:01:07.421828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 16:01:07.421836 | orchestrator | 
skipping: [testbed-node-1]
2025-07-12 16:01:07.421844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 16:01:07.421888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 16:01:07.421942 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.421951 | orchestrator |
2025-07-12 16:01:07.421959 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-07-12 16:01:07.421967 | orchestrator | Saturday 12 July 2025 15:58:07 +0000 (0:00:02.655) 0:05:49.389 *********
2025-07-12 16:01:07.421975 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-07-12 16:01:07.421983 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-07-12 16:01:07.421991 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:01:07.421999 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-07-12 16:01:07.422007 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-07-12 16:01:07.422040 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:01:07.422049 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-07-12 16:01:07.422056 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-07-12 16:01:07.422063 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:01:07.422069 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-07-12 16:01:07.422076 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-07-12 16:01:07.422082 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-07-12 16:01:07.422089 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-07-12 16:01:07.422096 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.422102 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.422109 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-07-12 16:01:07.422115 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-07-12 16:01:07.422122 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.422128 | orchestrator |
2025-07-12 16:01:07.422135 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-07-12 16:01:07.422142 | orchestrator | Saturday 12 July 2025 15:58:08 +0000 (0:00:00.794) 0:05:50.184 *********
2025-07-12 16:01:07.422153 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422286 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 16:01:07.422293 | orchestrator | 2025-07-12 16:01:07.422300 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 16:01:07.422307 | orchestrator | Saturday 12 July 2025 15:58:11 +0000 (0:00:03.740) 0:05:53.925 ********* 2025-07-12 16:01:07.422314 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.422320 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.422327 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.422337 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.422344 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.422350 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.422357 | orchestrator | 2025-07-12 16:01:07.422364 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 16:01:07.422370 | orchestrator | Saturday 12 July 2025 15:58:12 +0000 (0:00:00.478) 0:05:54.403 ********* 2025-07-12 16:01:07.422377 | orchestrator | 2025-07-12 16:01:07.422384 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 16:01:07.422390 | orchestrator | Saturday 12 July 2025 
15:58:12 +0000 (0:00:00.241) 0:05:54.645 ********* 2025-07-12 16:01:07.422397 | orchestrator | 2025-07-12 16:01:07.422403 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 16:01:07.422410 | orchestrator | Saturday 12 July 2025 15:58:12 +0000 (0:00:00.121) 0:05:54.767 ********* 2025-07-12 16:01:07.422417 | orchestrator | 2025-07-12 16:01:07.422423 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 16:01:07.422430 | orchestrator | Saturday 12 July 2025 15:58:12 +0000 (0:00:00.123) 0:05:54.890 ********* 2025-07-12 16:01:07.422436 | orchestrator | 2025-07-12 16:01:07.422443 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 16:01:07.422450 | orchestrator | Saturday 12 July 2025 15:58:13 +0000 (0:00:00.119) 0:05:55.009 ********* 2025-07-12 16:01:07.422456 | orchestrator | 2025-07-12 16:01:07.422463 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 16:01:07.422469 | orchestrator | Saturday 12 July 2025 15:58:13 +0000 (0:00:00.113) 0:05:55.123 ********* 2025-07-12 16:01:07.422480 | orchestrator | 2025-07-12 16:01:07.422487 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-07-12 16:01:07.422494 | orchestrator | Saturday 12 July 2025 15:58:13 +0000 (0:00:00.115) 0:05:55.238 ********* 2025-07-12 16:01:07.422500 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:01:07.422507 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:01:07.422513 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:01:07.422520 | orchestrator | 2025-07-12 16:01:07.422527 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-07-12 16:01:07.422533 | orchestrator | Saturday 12 July 2025 15:58:20 +0000 (0:00:06.902) 0:06:02.141 ********* 2025-07-12 
16:01:07.422540 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:01:07.422547 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:01:07.422553 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:01:07.422560 | orchestrator | 2025-07-12 16:01:07.422567 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-07-12 16:01:07.422573 | orchestrator | Saturday 12 July 2025 15:58:33 +0000 (0:00:13.342) 0:06:15.483 ********* 2025-07-12 16:01:07.422580 | orchestrator | changed: [testbed-node-4] 2025-07-12 16:01:07.422587 | orchestrator | changed: [testbed-node-5] 2025-07-12 16:01:07.422593 | orchestrator | changed: [testbed-node-3] 2025-07-12 16:01:07.422600 | orchestrator | 2025-07-12 16:01:07.422606 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-07-12 16:01:07.422613 | orchestrator | Saturday 12 July 2025 15:58:57 +0000 (0:00:23.611) 0:06:39.094 ********* 2025-07-12 16:01:07.422620 | orchestrator | changed: [testbed-node-3] 2025-07-12 16:01:07.422626 | orchestrator | changed: [testbed-node-5] 2025-07-12 16:01:07.422633 | orchestrator | changed: [testbed-node-4] 2025-07-12 16:01:07.422640 | orchestrator | 2025-07-12 16:01:07.422652 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-07-12 16:01:07.422659 | orchestrator | Saturday 12 July 2025 15:59:34 +0000 (0:00:37.816) 0:07:16.911 ********* 2025-07-12 16:01:07.422666 | orchestrator | changed: [testbed-node-4] 2025-07-12 16:01:07.422672 | orchestrator | changed: [testbed-node-3] 2025-07-12 16:01:07.422679 | orchestrator | changed: [testbed-node-5] 2025-07-12 16:01:07.422686 | orchestrator | 2025-07-12 16:01:07.422692 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-07-12 16:01:07.422699 | orchestrator | Saturday 12 July 2025 15:59:35 +0000 (0:00:01.067) 0:07:17.979 ********* 2025-07-12 16:01:07.422705 
| orchestrator | changed: [testbed-node-3] 2025-07-12 16:01:07.422712 | orchestrator | changed: [testbed-node-4] 2025-07-12 16:01:07.422719 | orchestrator | changed: [testbed-node-5] 2025-07-12 16:01:07.422725 | orchestrator | 2025-07-12 16:01:07.422732 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-07-12 16:01:07.422738 | orchestrator | Saturday 12 July 2025 15:59:36 +0000 (0:00:00.817) 0:07:18.797 ********* 2025-07-12 16:01:07.422745 | orchestrator | changed: [testbed-node-3] 2025-07-12 16:01:07.422752 | orchestrator | changed: [testbed-node-4] 2025-07-12 16:01:07.422758 | orchestrator | changed: [testbed-node-5] 2025-07-12 16:01:07.422765 | orchestrator | 2025-07-12 16:01:07.422772 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-07-12 16:01:07.422778 | orchestrator | Saturday 12 July 2025 15:59:56 +0000 (0:00:20.142) 0:07:38.940 ********* 2025-07-12 16:01:07.422785 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:01:07.422792 | orchestrator | 2025-07-12 16:01:07.422798 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-07-12 16:01:07.422805 | orchestrator | Saturday 12 July 2025 15:59:57 +0000 (0:00:00.138) 0:07:39.078 ********* 2025-07-12 16:01:07.422812 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:01:07.422818 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:01:07.422825 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:01:07.422832 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:01:07.422838 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:01:07.422851 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-07-12 16:01:07.422858 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 16:01:07.422865 | orchestrator |
2025-07-12 16:01:07.422872 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-07-12 16:01:07.422878 | orchestrator | Saturday 12 July 2025 16:00:19 +0000 (0:00:22.249) 0:08:01.328 *********
2025-07-12 16:01:07.422885 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:01:07.422906 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:01:07.422913 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.422919 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:01:07.422929 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.422936 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.422943 | orchestrator |
2025-07-12 16:01:07.422949 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-07-12 16:01:07.422956 | orchestrator | Saturday 12 July 2025 16:00:29 +0000 (0:00:10.056) 0:08:11.384 *********
2025-07-12 16:01:07.422963 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:01:07.422969 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:01:07.422976 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.422983 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.422989 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.422996 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-07-12 16:01:07.423002 | orchestrator |
2025-07-12 16:01:07.423009 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-12 16:01:07.423016 | orchestrator | Saturday 12 July 2025 16:00:33 +0000 (0:00:03.678) 0:08:15.062 *********
2025-07-12 16:01:07.423023 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 16:01:07.423029 | orchestrator |
2025-07-12 16:01:07.423036 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-12 16:01:07.423043 | orchestrator | Saturday 12 July 2025 16:00:45 +0000 (0:00:12.880) 0:08:27.943 *********
2025-07-12 16:01:07.423049 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 16:01:07.423056 | orchestrator |
2025-07-12 16:01:07.423063 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-07-12 16:01:07.423069 | orchestrator | Saturday 12 July 2025 16:00:47 +0000 (0:00:01.265) 0:08:29.209 *********
2025-07-12 16:01:07.423076 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:01:07.423082 | orchestrator |
2025-07-12 16:01:07.423089 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-07-12 16:01:07.423096 | orchestrator | Saturday 12 July 2025 16:00:48 +0000 (0:00:01.199) 0:08:30.408 *********
2025-07-12 16:01:07.423102 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 16:01:07.423109 | orchestrator |
2025-07-12 16:01:07.423116 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-07-12 16:01:07.423122 | orchestrator | Saturday 12 July 2025 16:00:59 +0000 (0:00:11.503) 0:08:41.912 *********
2025-07-12 16:01:07.423129 | orchestrator | ok: [testbed-node-4]
2025-07-12 16:01:07.423136 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:01:07.423142 | orchestrator | ok: [testbed-node-5]
2025-07-12 16:01:07.423149 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:01:07.423156 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:01:07.423162 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:01:07.423169 | orchestrator |
2025-07-12 16:01:07.423176 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-07-12 16:01:07.423182 | orchestrator |
2025-07-12 16:01:07.423189 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-07-12 16:01:07.423196 | orchestrator | Saturday 12 July 2025 16:01:01 +0000 (0:00:01.692) 0:08:43.605 *********
2025-07-12 16:01:07.423202 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:01:07.423209 | orchestrator | changed: [testbed-node-1]
2025-07-12 16:01:07.423220 | orchestrator | changed: [testbed-node-2]
2025-07-12 16:01:07.423227 | orchestrator |
2025-07-12 16:01:07.423233 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-07-12 16:01:07.423240 | orchestrator |
2025-07-12 16:01:07.423250 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-07-12 16:01:07.423257 | orchestrator | Saturday 12 July 2025 16:01:02 +0000 (0:00:01.091) 0:08:44.697 *********
2025-07-12 16:01:07.423263 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.423270 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.423277 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.423283 | orchestrator |
2025-07-12 16:01:07.423290 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-07-12 16:01:07.423297 | orchestrator |
2025-07-12 16:01:07.423303 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-07-12 16:01:07.423310 | orchestrator | Saturday 12 July 2025 16:01:03 +0000 (0:00:00.501) 0:08:45.198 *********
2025-07-12 16:01:07.423317 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-07-12 16:01:07.423323 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-07-12 16:01:07.423330 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-07-12 16:01:07.423337 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-07-12 16:01:07.423343 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-07-12 16:01:07.423350 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-07-12 16:01:07.423357 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:01:07.423363 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-07-12 16:01:07.423370 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-07-12 16:01:07.423377 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-07-12 16:01:07.423384 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-07-12 16:01:07.423390 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-07-12 16:01:07.423397 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-07-12 16:01:07.423404 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:01:07.423410 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-07-12 16:01:07.423417 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-07-12 16:01:07.423424 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-07-12 16:01:07.423430 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-07-12 16:01:07.423437 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-07-12 16:01:07.423444 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-07-12 16:01:07.423450 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:01:07.423457 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-07-12 16:01:07.423467 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-07-12 16:01:07.423474 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-07-12 16:01:07.423480 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-07-12 16:01:07.423487 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-07-12 16:01:07.423494 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-07-12 16:01:07.423500 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.423507 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-07-12 16:01:07.423514 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-07-12 16:01:07.423520 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-07-12 16:01:07.423527 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-07-12 16:01:07.423534 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-07-12 16:01:07.423540 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-07-12 16:01:07.423566 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.423574 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-07-12 16:01:07.423580 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-07-12 16:01:07.423587 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-07-12 16:01:07.423594 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-07-12 16:01:07.423600 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-07-12 16:01:07.423607 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-07-12 16:01:07.423613 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.423620 | orchestrator |
2025-07-12 16:01:07.423627 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-07-12 16:01:07.423633 | orchestrator |
2025-07-12 16:01:07.423640 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-07-12 16:01:07.423647 | orchestrator | Saturday 12 July 2025 16:01:04 +0000 (0:00:01.249) 0:08:46.447 *********
2025-07-12 16:01:07.423653 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-07-12 16:01:07.423660 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-07-12 16:01:07.423666 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.423673 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-07-12 16:01:07.423680 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-07-12 16:01:07.423686 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.423693 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-07-12 16:01:07.423699 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-07-12 16:01:07.423706 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.423712 | orchestrator |
2025-07-12 16:01:07.423719 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-07-12 16:01:07.423726 | orchestrator |
2025-07-12 16:01:07.423732 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-07-12 16:01:07.423739 | orchestrator | Saturday 12 July 2025 16:01:05 +0000 (0:00:00.850) 0:08:47.297 *********
2025-07-12 16:01:07.423749 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.423756 | orchestrator |
2025-07-12 16:01:07.423763 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-07-12 16:01:07.423769 | orchestrator |
2025-07-12 16:01:07.423776 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-07-12 16:01:07.423782 | orchestrator | Saturday 12 July 2025 16:01:05 +0000 (0:00:00.665) 0:08:47.963 *********
2025-07-12 16:01:07.423789 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:01:07.423796 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:01:07.423802 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:01:07.423809 | orchestrator |
2025-07-12 16:01:07.423815 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 16:01:07.423822 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 16:01:07.423830 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-07-12 16:01:07.423837 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-07-12 16:01:07.423843 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-07-12 16:01:07.423850 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-07-12 16:01:07.423856 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-07-12 16:01:07.423868 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-07-12 16:01:07.423874 | orchestrator |
2025-07-12 16:01:07.423881 | orchestrator |
2025-07-12 16:01:07.423888 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 16:01:07.423907 | orchestrator | Saturday 12 July 2025 16:01:06 +0000 (0:00:00.408) 0:08:48.372 *********
2025-07-12 16:01:07.423914 | orchestrator | ===============================================================================
2025-07-12 16:01:07.423924 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.82s
2025-07-12 16:01:07.423931 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.73s
2025-07-12 16:01:07.423938 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.75s
2025-07-12 16:01:07.423944 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.61s
2025-07-12 16:01:07.423951 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.25s
2025-07-12 16:01:07.423958 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.58s
2025-07-12 16:01:07.423964 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.34s
2025-07-12 16:01:07.423971 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 20.14s
2025-07-12 16:01:07.423978 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.49s
2025-07-12 16:01:07.423984 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.34s
2025-07-12 16:01:07.423991 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.32s
2025-07-12 16:01:07.423998 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.97s
2025-07-12 16:01:07.424004 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.88s
2025-07-12 16:01:07.424011 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.73s
2025-07-12 16:01:07.424017 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.50s
2025-07-12 16:01:07.424024 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.06s
2025-07-12 16:01:07.424030 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.77s
2025-07-12 16:01:07.424037 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.69s
2025-07-12 16:01:07.424044 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.25s
2025-07-12 16:01:07.424050 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 7.70s
2025-07-12 16:01:07.424057 | orchestrator | 2025-07-12 16:01:07 | INFO  | Wait 1 second(s) until the next check
2025-07-12 16:01:10.452659 | orchestrator | 2025-07-12 16:01:10 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED
2025-07-12 16:01:10.452767 | orchestrator | 2025-07-12 16:01:10 | INFO  | Wait 1 second(s) until the next check
2025-07-12 16:03:48.791081 | orchestrator | 2025-07-12 16:03:48 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state STARTED
2025-07-12 16:03:48.791204 | orchestrator | 2025-07-12 16:03:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 16:03:51.842963 | orchestrator |
2025-07-12 16:03:51.843049 | orchestrator | 2025-07-12 16:03:51 | INFO  | Task f1c3e972-7877-4282-a469-5e89385ef74c is in state SUCCESS
2025-07-12 16:03:51.844661 | orchestrator |
2025-07-12 16:03:51.844683 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 16:03:51.844688 | orchestrator |
2025-07-12 16:03:51.844692 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 16:03:51.844697 | orchestrator | Saturday 12 July 2025 15:58:48 +0000 (0:00:00.283) 0:00:00.283 *********
2025-07-12 16:03:51.844702 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:03:51.844707 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:03:51.844711 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:03:51.844715 | orchestrator |
2025-07-12 16:03:51.844719 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 16:03:51.844723 | orchestrator | Saturday 12 July 2025 15:58:48 +0000 (0:00:00.289) 0:00:00.572 *********
2025-07-12 16:03:51.844727 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-07-12 16:03:51.844731 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-07-12 16:03:51.844735 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-07-12 16:03:51.844739 | orchestrator |
2025-07-12 16:03:51.844743 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-07-12 16:03:51.844747 | orchestrator |
2025-07-12 16:03:51.844750 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-12 16:03:51.844754 | orchestrator | Saturday 12 July 2025 15:58:49 +0000 (0:00:00.400) 0:00:00.972 *********
2025-07-12 16:03:51.844758 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 16:03:51.844763 | orchestrator |
2025-07-12 16:03:51.844766 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-07-12 16:03:51.844770 | orchestrator | Saturday 12 July 2025 15:58:49 +0000 (0:00:00.557) 0:00:01.529 *********
2025-07-12 16:03:51.844775 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-07-12 16:03:51.844779 | orchestrator |
2025-07-12 16:03:51.844806 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-07-12 16:03:51.844811 | orchestrator | Saturday 12 July 2025 15:58:54 +0000 (0:00:04.429) 0:00:05.958 *********
2025-07-12 16:03:51.844815 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-07-12 16:03:51.844819 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-07-12 16:03:51.844823 | orchestrator |
2025-07-12 16:03:51.844827 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-07-12 16:03:51.844830 | orchestrator | Saturday 12 July 2025 15:59:00 +0000 (0:00:06.676) 0:00:12.635 *********
2025-07-12 16:03:51.844862 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 16:03:51.844867 | orchestrator |
2025-07-12 16:03:51.844870 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-07-12 16:03:51.844874 | orchestrator | Saturday 12 July 2025 15:59:04 +0000 (0:00:03.679) 0:00:16.315 *********
2025-07-12 16:03:51.844878 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 16:03:51.844882 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-07-12 16:03:51.844886 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-07-12 16:03:51.844890 | orchestrator |
2025-07-12 16:03:51.844894 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-07-12 16:03:51.844897 | orchestrator | Saturday 12 July 2025 15:59:13 +0000 (0:00:09.568) 0:00:25.883 *********
2025-07-12 16:03:51.844901 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 16:03:51.844905 | orchestrator |
2025-07-12 16:03:51.844909 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-07-12 16:03:51.844913 | orchestrator | Saturday 12 July 2025 15:59:17 +0000 (0:00:03.892) 0:00:29.775 *********
2025-07-12 16:03:51.844916 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-12 16:03:51.844920 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-12 16:03:51.844924 | orchestrator | 2025-07-12 16:03:51.844928 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-07-12 16:03:51.844931 | orchestrator | Saturday 12 July 2025 15:59:26 +0000 (0:00:08.272) 0:00:38.048 ********* 2025-07-12 16:03:51.844935 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-07-12 16:03:51.844939 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-07-12 16:03:51.844942 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-07-12 16:03:51.844946 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-07-12 16:03:51.844950 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-07-12 16:03:51.844953 | orchestrator | 2025-07-12 16:03:51.844957 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 16:03:51.844961 | orchestrator | Saturday 12 July 2025 15:59:43 +0000 (0:00:17.638) 0:00:55.686 ********* 2025-07-12 16:03:51.844964 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:03:51.844968 | orchestrator | 2025-07-12 16:03:51.844972 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-07-12 16:03:51.844976 | orchestrator | Saturday 12 July 2025 15:59:44 +0000 (0:00:00.516) 0:00:56.203 ********* 2025-07-12 16:03:51.844979 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.844983 | orchestrator | 2025-07-12 16:03:51.844987 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-07-12 
16:03:51.844990 | orchestrator | Saturday 12 July 2025 15:59:49 +0000 (0:00:05.365) 0:01:01.569 ********* 2025-07-12 16:03:51.844994 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.844998 | orchestrator | 2025-07-12 16:03:51.845002 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-07-12 16:03:51.845012 | orchestrator | Saturday 12 July 2025 15:59:54 +0000 (0:00:04.734) 0:01:06.303 ********* 2025-07-12 16:03:51.845016 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:03:51.845020 | orchestrator | 2025-07-12 16:03:51.845024 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-07-12 16:03:51.845027 | orchestrator | Saturday 12 July 2025 15:59:57 +0000 (0:00:03.616) 0:01:09.920 ********* 2025-07-12 16:03:51.845031 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-07-12 16:03:51.845035 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-07-12 16:03:51.845039 | orchestrator | 2025-07-12 16:03:51.845042 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-07-12 16:03:51.845046 | orchestrator | Saturday 12 July 2025 16:00:09 +0000 (0:00:11.558) 0:01:21.478 ********* 2025-07-12 16:03:51.845060 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-07-12 16:03:51.845064 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-07-12 16:03:51.845069 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-07-12 16:03:51.845074 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': 
'5555'}]) 2025-07-12 16:03:51.845078 | orchestrator | 2025-07-12 16:03:51.845081 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-07-12 16:03:51.845085 | orchestrator | Saturday 12 July 2025 16:00:27 +0000 (0:00:17.846) 0:01:39.325 ********* 2025-07-12 16:03:51.845089 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.845093 | orchestrator | 2025-07-12 16:03:51.845099 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-07-12 16:03:51.845103 | orchestrator | Saturday 12 July 2025 16:00:32 +0000 (0:00:05.199) 0:01:44.524 ********* 2025-07-12 16:03:51.845107 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.845111 | orchestrator | 2025-07-12 16:03:51.845114 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-07-12 16:03:51.845118 | orchestrator | Saturday 12 July 2025 16:00:38 +0000 (0:00:05.536) 0:01:50.061 ********* 2025-07-12 16:03:51.845122 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:03:51.845125 | orchestrator | 2025-07-12 16:03:51.845129 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-07-12 16:03:51.845133 | orchestrator | Saturday 12 July 2025 16:00:38 +0000 (0:00:00.202) 0:01:50.264 ********* 2025-07-12 16:03:51.845136 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.845140 | orchestrator | 2025-07-12 16:03:51.845144 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 16:03:51.845148 | orchestrator | Saturday 12 July 2025 16:00:43 +0000 (0:00:05.449) 0:01:55.713 ********* 2025-07-12 16:03:51.845151 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:03:51.845155 | orchestrator | 2025-07-12 16:03:51.845159 | orchestrator | TASK [octavia : Create ports for 
Octavia health-manager nodes] ***************** 2025-07-12 16:03:51.845162 | orchestrator | Saturday 12 July 2025 16:00:45 +0000 (0:00:01.246) 0:01:56.959 ********* 2025-07-12 16:03:51.845166 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.845170 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.845174 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.845177 | orchestrator | 2025-07-12 16:03:51.845181 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-07-12 16:03:51.845185 | orchestrator | Saturday 12 July 2025 16:00:50 +0000 (0:00:05.105) 0:02:02.064 ********* 2025-07-12 16:03:51.845189 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.845192 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.845196 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.845200 | orchestrator | 2025-07-12 16:03:51.845203 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-07-12 16:03:51.845207 | orchestrator | Saturday 12 July 2025 16:00:55 +0000 (0:00:05.151) 0:02:07.215 ********* 2025-07-12 16:03:51.845211 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.845215 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.845218 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.845222 | orchestrator | 2025-07-12 16:03:51.845226 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-07-12 16:03:51.845229 | orchestrator | Saturday 12 July 2025 16:00:56 +0000 (0:00:00.783) 0:02:07.999 ********* 2025-07-12 16:03:51.845233 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:03:51.845246 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:03:51.845250 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:03:51.845254 | orchestrator | 2025-07-12 16:03:51.845258 | orchestrator | TASK [octavia : Create octavia dhclient conf] 
********************************** 2025-07-12 16:03:51.845263 | orchestrator | Saturday 12 July 2025 16:00:58 +0000 (0:00:02.361) 0:02:10.361 ********* 2025-07-12 16:03:51.845267 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.845272 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.845276 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.845280 | orchestrator | 2025-07-12 16:03:51.845285 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-07-12 16:03:51.845289 | orchestrator | Saturday 12 July 2025 16:00:59 +0000 (0:00:01.303) 0:02:11.664 ********* 2025-07-12 16:03:51.845293 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.845297 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.845302 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.845306 | orchestrator | 2025-07-12 16:03:51.845311 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-07-12 16:03:51.845315 | orchestrator | Saturday 12 July 2025 16:01:00 +0000 (0:00:01.236) 0:02:12.901 ********* 2025-07-12 16:03:51.845319 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.845324 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.845328 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.845332 | orchestrator | 2025-07-12 16:03:51.845339 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-07-12 16:03:51.845344 | orchestrator | Saturday 12 July 2025 16:01:02 +0000 (0:00:02.037) 0:02:14.939 ********* 2025-07-12 16:03:51.845348 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.845353 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.845357 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.845361 | orchestrator | 2025-07-12 16:03:51.845366 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] 
***************************** 2025-07-12 16:03:51.845370 | orchestrator | Saturday 12 July 2025 16:01:04 +0000 (0:00:01.701) 0:02:16.641 ********* 2025-07-12 16:03:51.845375 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:03:51.845379 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:03:51.845384 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:03:51.845388 | orchestrator | 2025-07-12 16:03:51.845392 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-07-12 16:03:51.845396 | orchestrator | Saturday 12 July 2025 16:01:05 +0000 (0:00:00.615) 0:02:17.257 ********* 2025-07-12 16:03:51.845401 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:03:51.845405 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:03:51.845409 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:03:51.845414 | orchestrator | 2025-07-12 16:03:51.845418 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 16:03:51.845422 | orchestrator | Saturday 12 July 2025 16:01:08 +0000 (0:00:02.791) 0:02:20.048 ********* 2025-07-12 16:03:51.845427 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:03:51.845431 | orchestrator | 2025-07-12 16:03:51.845436 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-07-12 16:03:51.845440 | orchestrator | Saturday 12 July 2025 16:01:08 +0000 (0:00:00.657) 0:02:20.706 ********* 2025-07-12 16:03:51.845444 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:03:51.845448 | orchestrator | 2025-07-12 16:03:51.845461 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-07-12 16:03:51.845466 | orchestrator | Saturday 12 July 2025 16:01:12 +0000 (0:00:04.105) 0:02:24.811 ********* 2025-07-12 16:03:51.845669 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:03:51.845674 | 
orchestrator | 2025-07-12 16:03:51.845677 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-07-12 16:03:51.845681 | orchestrator | Saturday 12 July 2025 16:01:16 +0000 (0:00:03.323) 0:02:28.134 ********* 2025-07-12 16:03:51.845695 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-07-12 16:03:51.845711 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-07-12 16:03:51.845715 | orchestrator | 2025-07-12 16:03:51.845719 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-07-12 16:03:51.845723 | orchestrator | Saturday 12 July 2025 16:01:23 +0000 (0:00:07.390) 0:02:35.525 ********* 2025-07-12 16:03:51.845727 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:03:51.845731 | orchestrator | 2025-07-12 16:03:51.845734 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-07-12 16:03:51.845738 | orchestrator | Saturday 12 July 2025 16:01:27 +0000 (0:00:03.426) 0:02:38.952 ********* 2025-07-12 16:03:51.845742 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:03:51.845746 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:03:51.845749 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:03:51.845753 | orchestrator | 2025-07-12 16:03:51.845757 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-07-12 16:03:51.845761 | orchestrator | Saturday 12 July 2025 16:01:27 +0000 (0:00:00.309) 0:02:39.262 ********* 2025-07-12 16:03:51.845767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.845777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.845782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.845796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.845807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.845811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.845816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.845820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.845829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.845834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.845850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.845855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.845859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.845864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.845871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.845875 | orchestrator | 2025-07-12 16:03:51.845878 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-07-12 16:03:51.845882 | orchestrator | Saturday 12 July 2025 16:01:29 +0000 (0:00:02.573) 0:02:41.836 ********* 2025-07-12 16:03:51.845886 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:03:51.845890 | orchestrator | 2025-07-12 16:03:51.845894 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-07-12 16:03:51.845898 | orchestrator | Saturday 12 July 2025 16:01:30 +0000 (0:00:00.288) 0:02:42.125 ********* 2025-07-12 16:03:51.845902 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:03:51.845905 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:03:51.845909 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:03:51.845916 | orchestrator | 2025-07-12 16:03:51.845919 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-07-12 16:03:51.845923 | orchestrator | Saturday 12 July 2025 16:01:30 +0000 (0:00:00.307) 0:02:42.432 ********* 2025-07-12 16:03:51.845930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 16:03:51.845934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 16:03:51.845938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.845942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.845946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 16:03:51.845950 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:03:51.845957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 16:03:51.845983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 16:03:51.845987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.845991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.845995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 16:03:51.845999 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:03:51.846007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 16:03:51.846051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 16:03:51.846059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 16:03:51.846072 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:03:51.846076 | orchestrator | 2025-07-12 16:03:51.846079 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 16:03:51.846083 | orchestrator | Saturday 12 July 2025 16:01:31 +0000 (0:00:00.671) 0:02:43.103 ********* 2025-07-12 16:03:51.846087 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:03:51.846091 | orchestrator | 2025-07-12 16:03:51.846095 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-07-12 16:03:51.846098 | orchestrator | Saturday 12 July 2025 16:01:31 +0000 (0:00:00.539) 0:02:43.643 ********* 2025-07-12 16:03:51.846102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.846120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.846134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.846138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.846143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.846147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.846151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846243 | orchestrator | 2025-07-12 16:03:51.846247 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-07-12 16:03:51.846251 | orchestrator | Saturday 12 July 2025 16:01:36 +0000 (0:00:05.242) 0:02:48.886 ********* 2025-07-12 16:03:51.846257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 16:03:51.846261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 16:03:51.846265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 16:03:51.846308 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:03:51.846602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 16:03:51.846616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 16:03:51.846622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 16:03:51.846656 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:03:51.846667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 16:03:51.846672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 16:03:51.846690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 16:03:51.846712 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:03:51.846717 | orchestrator | 2025-07-12 16:03:51.846721 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-07-12 16:03:51.846725 | orchestrator | Saturday 12 July 2025 16:01:37 +0000 (0:00:00.670) 0:02:49.556 ********* 2025-07-12 16:03:51.846729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 16:03:51.846737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 16:03:51.846742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 16:03:51.846757 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:03:51.846761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 16:03:51.846772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 16:03:51.846780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 16:03:51.846803 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:03:51.846807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 16:03:51.846815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 16:03:51.846819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 16:03:51.846831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 16:03:51.846836 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:03:51.846840 | orchestrator | 2025-07-12 16:03:51.846844 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-07-12 16:03:51.846848 | orchestrator | Saturday 12 July 2025 16:01:38 +0000 (0:00:00.875) 0:02:50.432 ********* 2025-07-12 16:03:51.846862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.846870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.846875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.846883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.846888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.846895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.846899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.846949 | orchestrator | 2025-07-12 16:03:51.846953 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-07-12 16:03:51.846957 | orchestrator | Saturday 12 July 2025 16:01:44 +0000 (0:00:05.521) 0:02:55.953 ********* 2025-07-12 16:03:51.846962 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-12 16:03:51.846966 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-12 16:03:51.846970 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-12 16:03:51.846974 | orchestrator | 2025-07-12 16:03:51.846978 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-07-12 16:03:51.846983 | orchestrator | Saturday 12 July 2025 16:01:45 +0000 (0:00:01.652) 0:02:57.606 ********* 2025-07-12 16:03:51.846990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.846994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.847003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.847008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.847012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.847016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.847023 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847041 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847062 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847082 | orchestrator | 2025-07-12 16:03:51.847086 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 
2025-07-12 16:03:51.847092 | orchestrator | Saturday 12 July 2025 16:02:01 +0000 (0:00:15.893) 0:03:13.499 ********* 2025-07-12 16:03:51.847096 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847100 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.847104 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.847107 | orchestrator | 2025-07-12 16:03:51.847111 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-07-12 16:03:51.847115 | orchestrator | Saturday 12 July 2025 16:02:03 +0000 (0:00:01.521) 0:03:15.020 ********* 2025-07-12 16:03:51.847119 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-12 16:03:51.847122 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-12 16:03:51.847126 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-12 16:03:51.847130 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-12 16:03:51.847134 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-12 16:03:51.847137 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-12 16:03:51.847141 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-12 16:03:51.847145 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-12 16:03:51.847149 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-12 16:03:51.847152 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-12 16:03:51.847156 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-12 16:03:51.847160 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-12 16:03:51.847164 | orchestrator | 2025-07-12 16:03:51.847167 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-07-12 
16:03:51.847171 | orchestrator | Saturday 12 July 2025 16:02:08 +0000 (0:00:05.102) 0:03:20.123 ********* 2025-07-12 16:03:51.847175 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-12 16:03:51.847179 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-12 16:03:51.847182 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-12 16:03:51.847186 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-12 16:03:51.847190 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-12 16:03:51.847193 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-12 16:03:51.847197 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-12 16:03:51.847201 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-12 16:03:51.847205 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-12 16:03:51.847208 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-12 16:03:51.847212 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-12 16:03:51.847216 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-12 16:03:51.847220 | orchestrator | 2025-07-12 16:03:51.847223 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-07-12 16:03:51.847227 | orchestrator | Saturday 12 July 2025 16:02:13 +0000 (0:00:05.148) 0:03:25.271 ********* 2025-07-12 16:03:51.847231 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-12 16:03:51.847235 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-12 16:03:51.847240 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-12 16:03:51.847244 | orchestrator | changed: [testbed-node-1] => 
(item=client_ca.cert.pem) 2025-07-12 16:03:51.847253 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-12 16:03:51.847258 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-12 16:03:51.847262 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-12 16:03:51.847267 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-12 16:03:51.847273 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-12 16:03:51.847278 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-12 16:03:51.847282 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-12 16:03:51.847286 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-12 16:03:51.847291 | orchestrator | 2025-07-12 16:03:51.847295 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-07-12 16:03:51.847299 | orchestrator | Saturday 12 July 2025 16:02:18 +0000 (0:00:04.782) 0:03:30.053 ********* 2025-07-12 16:03:51.847313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.847319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.847323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 16:03:51.847328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.847346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.847351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 16:03:51.847357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 16:03:51.847436 | orchestrator | 2025-07-12 16:03:51.847440 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 16:03:51.847445 | orchestrator | Saturday 12 July 2025 16:02:21 +0000 (0:00:03.798) 0:03:33.852 ********* 2025-07-12 16:03:51.847449 | 
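The container definitions logged above wire health checks through helper commands run inside each container, e.g. `healthcheck_curl http://192.168.16.10:9876` for the API and `healthcheck_port octavia-worker 5672` for the worker. As a rough illustration only (this is not kolla's actual healthcheck script), a TCP-port probe of that kind boils down to:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Hypothetical sketch of a healthcheck_port-style probe: return True
    if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable, or timed out.
        return False

# Probe a port that normally has no listener (port 1 on localhost).
print(port_reachable("127.0.0.1", 1, timeout=0.2))
```

Docker then combines such a probe with the logged `interval`, `retries`, `start_period`, and `timeout` values to decide when a container flips to unhealthy.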
orchestrator | skipping: [testbed-node-0] 2025-07-12 16:03:51.847453 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:03:51.847458 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:03:51.847462 | orchestrator | 2025-07-12 16:03:51.847466 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-07-12 16:03:51.847488 | orchestrator | Saturday 12 July 2025 16:02:22 +0000 (0:00:00.306) 0:03:34.159 ********* 2025-07-12 16:03:51.847493 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847502 | orchestrator | 2025-07-12 16:03:51.847506 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-07-12 16:03:51.847510 | orchestrator | Saturday 12 July 2025 16:02:24 +0000 (0:00:02.197) 0:03:36.356 ********* 2025-07-12 16:03:51.847514 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847519 | orchestrator | 2025-07-12 16:03:51.847523 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-07-12 16:03:51.847527 | orchestrator | Saturday 12 July 2025 16:02:27 +0000 (0:00:02.691) 0:03:39.048 ********* 2025-07-12 16:03:51.847532 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847536 | orchestrator | 2025-07-12 16:03:51.847540 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-07-12 16:03:51.847545 | orchestrator | Saturday 12 July 2025 16:02:29 +0000 (0:00:02.454) 0:03:41.502 ********* 2025-07-12 16:03:51.847549 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847553 | orchestrator | 2025-07-12 16:03:51.847557 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-07-12 16:03:51.847562 | orchestrator | Saturday 12 July 2025 16:02:31 +0000 (0:00:02.344) 0:03:43.846 ********* 2025-07-12 16:03:51.847566 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847570 | 
orchestrator | 2025-07-12 16:03:51.847574 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-12 16:03:51.847579 | orchestrator | Saturday 12 July 2025 16:02:53 +0000 (0:00:21.945) 0:04:05.792 ********* 2025-07-12 16:03:51.847583 | orchestrator | 2025-07-12 16:03:51.847588 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-12 16:03:51.847593 | orchestrator | Saturday 12 July 2025 16:02:53 +0000 (0:00:00.061) 0:04:05.854 ********* 2025-07-12 16:03:51.847597 | orchestrator | 2025-07-12 16:03:51.847601 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-12 16:03:51.847604 | orchestrator | Saturday 12 July 2025 16:02:53 +0000 (0:00:00.065) 0:04:05.919 ********* 2025-07-12 16:03:51.847608 | orchestrator | 2025-07-12 16:03:51.847612 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-07-12 16:03:51.847618 | orchestrator | Saturday 12 July 2025 16:02:54 +0000 (0:00:00.065) 0:04:05.985 ********* 2025-07-12 16:03:51.847622 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847626 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.847629 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.847633 | orchestrator | 2025-07-12 16:03:51.847637 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-07-12 16:03:51.847640 | orchestrator | Saturday 12 July 2025 16:03:10 +0000 (0:00:16.371) 0:04:22.356 ********* 2025-07-12 16:03:51.847644 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847648 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.847652 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.847655 | orchestrator | 2025-07-12 16:03:51.847659 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 
2025-07-12 16:03:51.847663 | orchestrator | Saturday 12 July 2025 16:03:21 +0000 (0:00:11.508) 0:04:33.865 ********* 2025-07-12 16:03:51.847667 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.847671 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847674 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.847678 | orchestrator | 2025-07-12 16:03:51.847682 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-07-12 16:03:51.847686 | orchestrator | Saturday 12 July 2025 16:03:32 +0000 (0:00:10.347) 0:04:44.212 ********* 2025-07-12 16:03:51.847689 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.847693 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.847697 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847701 | orchestrator | 2025-07-12 16:03:51.847704 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-07-12 16:03:51.847708 | orchestrator | Saturday 12 July 2025 16:03:40 +0000 (0:00:08.316) 0:04:52.529 ********* 2025-07-12 16:03:51.847712 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:03:51.847719 | orchestrator | changed: [testbed-node-2] 2025-07-12 16:03:51.847723 | orchestrator | changed: [testbed-node-1] 2025-07-12 16:03:51.847727 | orchestrator | 2025-07-12 16:03:51.847733 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 16:03:51.847738 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 16:03:51.847742 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 16:03:51.847746 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 16:03:51.847750 | orchestrator | 2025-07-12 16:03:51.847753 | orchestrator | 2025-07-12 
16:03:51.847757 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 16:03:51.847761 | orchestrator | Saturday 12 July 2025 16:03:50 +0000 (0:00:10.342) 0:05:02.871 ********* 2025-07-12 16:03:51.847765 | orchestrator | =============================================================================== 2025-07-12 16:03:51.847768 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.95s 2025-07-12 16:03:51.847772 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.85s 2025-07-12 16:03:51.847776 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.64s 2025-07-12 16:03:51.847780 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.37s 2025-07-12 16:03:51.847783 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.89s 2025-07-12 16:03:51.847787 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.56s 2025-07-12 16:03:51.847791 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.51s 2025-07-12 16:03:51.847795 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.35s 2025-07-12 16:03:51.847798 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.34s 2025-07-12 16:03:51.847802 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.57s 2025-07-12 16:03:51.847806 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.32s 2025-07-12 16:03:51.847809 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.27s 2025-07-12 16:03:51.847813 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.39s 2025-07-12 16:03:51.847817 | 
orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.68s 2025-07-12 16:03:51.847821 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.54s 2025-07-12 16:03:51.847824 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.52s 2025-07-12 16:03:51.847828 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.45s 2025-07-12 16:03:51.847832 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.37s 2025-07-12 16:03:51.847836 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.24s 2025-07-12 16:03:51.847839 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.20s 2025-07-12 16:03:51.847843 | orchestrator | 2025-07-12 16:03:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 16:04:52.709978 | orchestrator | 2025-07-12 16:04:53.008733 | orchestrator | 2025-07-12 16:04:53.016040 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Jul 12 16:04:53 UTC 2025 2025-07-12 16:04:53.016088 | orchestrator | 2025-07-12 16:04:53.488189 | orchestrator | ok: Runtime: 0:34:43.299694 2025-07-12 16:04:53.759763 | 2025-07-12 16:04:53.759944 | TASK [Bootstrap services] 2025-07-12 16:04:54.567788 | orchestrator | 2025-07-12 16:04:54.567943 | orchestrator | # BOOTSTRAP 2025-07-12 16:04:54.567959 | orchestrator | 2025-07-12 16:04:54.567969 | orchestrator | + set -e 2025-07-12 16:04:54.567978 | orchestrator | + echo 2025-07-12 16:04:54.567987 | orchestrator | + echo '# BOOTSTRAP'
2025-07-12 16:04:54.568004 | orchestrator | + echo 2025-07-12 16:04:54.568046 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-07-12 16:04:54.578817 | orchestrator | + set -e 2025-07-12 16:04:54.578908 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-07-12 16:04:58.518175 | orchestrator | 2025-07-12 16:04:58 | INFO  | It takes a moment until task 23252f67-786b-48df-a556-99359098ef8e (flavor-manager) has been started and output is visible here. 2025-07-12 16:05:06.395774 | orchestrator | 2025-07-12 16:05:02 | INFO  | Flavor SCS-1V-4 created 2025-07-12 16:05:06.395912 | orchestrator | 2025-07-12 16:05:02 | INFO  | Flavor SCS-2V-8 created 2025-07-12 16:05:06.395930 | orchestrator | 2025-07-12 16:05:02 | INFO  | Flavor SCS-4V-16 created 2025-07-12 16:05:06.395944 | orchestrator | 2025-07-12 16:05:02 | INFO  | Flavor SCS-8V-32 created 2025-07-12 16:05:06.395956 | orchestrator | 2025-07-12 16:05:02 | INFO  | Flavor SCS-1V-2 created 2025-07-12 16:05:06.395968 | orchestrator | 2025-07-12 16:05:03 | INFO  | Flavor SCS-2V-4 created 2025-07-12 16:05:06.395979 | orchestrator | 2025-07-12 16:05:03 | INFO  | Flavor SCS-4V-8 created 2025-07-12 16:05:06.395992 | orchestrator | 2025-07-12 16:05:03 | INFO  | Flavor SCS-8V-16 created 2025-07-12 16:05:06.396016 | orchestrator | 2025-07-12 16:05:03 | INFO  | Flavor SCS-16V-32 created 2025-07-12 16:05:06.396028 | orchestrator | 2025-07-12 16:05:03 | INFO  | Flavor SCS-1V-8 created 2025-07-12 16:05:06.396039 | orchestrator | 2025-07-12 16:05:03 | INFO  | Flavor SCS-2V-16 created 2025-07-12 16:05:06.396050 | orchestrator | 2025-07-12 16:05:03 | INFO  | Flavor SCS-4V-32 created 2025-07-12 16:05:06.396061 | orchestrator | 2025-07-12 16:05:04 | INFO  | Flavor SCS-1L-1 created 2025-07-12 16:05:06.396072 | orchestrator | 2025-07-12 16:05:04 | INFO  | Flavor SCS-2V-4-20s created 2025-07-12 16:05:06.396083 | orchestrator | 2025-07-12 16:05:04 | INFO  | Flavor SCS-4V-16-100s created 
2025-07-12 16:05:06.396094 | orchestrator | 2025-07-12 16:05:04 | INFO  | Flavor SCS-1V-4-10 created 2025-07-12 16:05:06.396105 | orchestrator | 2025-07-12 16:05:04 | INFO  | Flavor SCS-2V-8-20 created 2025-07-12 16:05:06.396116 | orchestrator | 2025-07-12 16:05:04 | INFO  | Flavor SCS-4V-16-50 created 2025-07-12 16:05:06.396128 | orchestrator | 2025-07-12 16:05:04 | INFO  | Flavor SCS-8V-32-100 created 2025-07-12 16:05:06.396139 | orchestrator | 2025-07-12 16:05:05 | INFO  | Flavor SCS-1V-2-5 created 2025-07-12 16:05:06.396150 | orchestrator | 2025-07-12 16:05:05 | INFO  | Flavor SCS-2V-4-10 created 2025-07-12 16:05:06.396161 | orchestrator | 2025-07-12 16:05:05 | INFO  | Flavor SCS-4V-8-20 created 2025-07-12 16:05:06.396173 | orchestrator | 2025-07-12 16:05:05 | INFO  | Flavor SCS-8V-16-50 created 2025-07-12 16:05:06.396184 | orchestrator | 2025-07-12 16:05:05 | INFO  | Flavor SCS-16V-32-100 created 2025-07-12 16:05:06.396195 | orchestrator | 2025-07-12 16:05:05 | INFO  | Flavor SCS-1V-8-20 created 2025-07-12 16:05:06.396206 | orchestrator | 2025-07-12 16:05:05 | INFO  | Flavor SCS-2V-16-50 created 2025-07-12 16:05:06.396217 | orchestrator | 2025-07-12 16:05:06 | INFO  | Flavor SCS-4V-32-100 created 2025-07-12 16:05:06.396228 | orchestrator | 2025-07-12 16:05:06 | INFO  | Flavor SCS-1L-1-5 created 2025-07-12 16:05:08.461880 | orchestrator | 2025-07-12 16:05:08 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-07-12 16:05:18.687892 | orchestrator | 2025-07-12 16:05:18 | INFO  | Task c0836197-75ce-49da-b341-0221f583ff32 (bootstrap-basic) was prepared for execution. 2025-07-12 16:05:18.688076 | orchestrator | 2025-07-12 16:05:18 | INFO  | It takes a moment until task c0836197-75ce-49da-b341-0221f583ff32 (bootstrap-basic) has been started and output is visible here. 
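The flavor names created above follow the SCS standard flavor naming scheme, roughly `SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]` (so `SCS-2V-4-20s` is 2 vCPUs, 4 GiB RAM, 20 GB local SSD). A hypothetical parser sketch, assuming only the simple name forms that appear in this log:

```python
import re

# Covers only the simple SCS flavor name forms seen above, e.g.
# SCS-1V-4, SCS-1L-1, SCS-2V-8-20, SCS-4V-16-100s.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<vcpus>\d+)(?P<cpu_class>[VL])-(?P<ram_gib>\d+)"
    r"(?:-(?P<disk_gb>\d+)(?P<disk_class>s?))?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name into its resource components."""
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not a simple SCS flavor name: {name}")
    d = m.groupdict()
    return {
        "vcpus": int(d["vcpus"]),
        "ram_gib": int(d["ram_gib"]),
        "disk_gb": int(d["disk_gb"]) if d["disk_gb"] else 0,
        "local_ssd": d["disk_class"] == "s",
    }

print(parse_scs_flavor("SCS-2V-4-20s"))
```

Names without a disk part (e.g. `SCS-1V-4`) denote flavors with no root disk size guarantee, which the sketch reports as `disk_gb: 0`.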
2025-07-12 16:06:16.842344 | orchestrator | 2025-07-12 16:06:16.842465 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-07-12 16:06:16.842481 | orchestrator | 2025-07-12 16:06:16.842494 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 16:06:16.842505 | orchestrator | Saturday 12 July 2025 16:05:22 +0000 (0:00:00.074) 0:00:00.074 ********* 2025-07-12 16:06:16.842516 | orchestrator | ok: [localhost] 2025-07-12 16:06:16.842527 | orchestrator | 2025-07-12 16:06:16.842538 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-07-12 16:06:16.842551 | orchestrator | Saturday 12 July 2025 16:05:24 +0000 (0:00:01.759) 0:00:01.834 ********* 2025-07-12 16:06:16.842562 | orchestrator | ok: [localhost] 2025-07-12 16:06:16.842573 | orchestrator | 2025-07-12 16:06:16.842584 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-07-12 16:06:16.842594 | orchestrator | Saturday 12 July 2025 16:05:32 +0000 (0:00:07.978) 0:00:09.812 ********* 2025-07-12 16:06:16.842605 | orchestrator | changed: [localhost] 2025-07-12 16:06:16.842616 | orchestrator | 2025-07-12 16:06:16.842627 | orchestrator | TASK [Get volume type local] *************************************************** 2025-07-12 16:06:16.842638 | orchestrator | Saturday 12 July 2025 16:05:39 +0000 (0:00:07.330) 0:00:17.143 ********* 2025-07-12 16:06:16.842650 | orchestrator | ok: [localhost] 2025-07-12 16:06:16.842661 | orchestrator | 2025-07-12 16:06:16.842672 | orchestrator | TASK [Create volume type local] ************************************************ 2025-07-12 16:06:16.842682 | orchestrator | Saturday 12 July 2025 16:05:46 +0000 (0:00:06.723) 0:00:23.866 ********* 2025-07-12 16:06:16.842693 | orchestrator | changed: [localhost] 2025-07-12 16:06:16.842708 | orchestrator | 2025-07-12 16:06:16.842719 | orchestrator | 
TASK [Create public network] *************************************************** 2025-07-12 16:06:16.842730 | orchestrator | Saturday 12 July 2025 16:05:53 +0000 (0:00:07.099) 0:00:30.966 ********* 2025-07-12 16:06:16.842740 | orchestrator | changed: [localhost] 2025-07-12 16:06:16.842751 | orchestrator | 2025-07-12 16:06:16.842762 | orchestrator | TASK [Set public network to default] ******************************************* 2025-07-12 16:06:16.842773 | orchestrator | Saturday 12 July 2025 16:05:58 +0000 (0:00:05.196) 0:00:36.162 ********* 2025-07-12 16:06:16.842783 | orchestrator | changed: [localhost] 2025-07-12 16:06:16.842794 | orchestrator | 2025-07-12 16:06:16.842816 | orchestrator | TASK [Create public subnet] **************************************************** 2025-07-12 16:06:16.842828 | orchestrator | Saturday 12 July 2025 16:06:04 +0000 (0:00:06.203) 0:00:42.366 ********* 2025-07-12 16:06:16.842841 | orchestrator | changed: [localhost] 2025-07-12 16:06:16.842853 | orchestrator | 2025-07-12 16:06:16.842866 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-07-12 16:06:16.842878 | orchestrator | Saturday 12 July 2025 16:06:09 +0000 (0:00:04.379) 0:00:46.745 ********* 2025-07-12 16:06:16.842891 | orchestrator | changed: [localhost] 2025-07-12 16:06:16.842902 | orchestrator | 2025-07-12 16:06:16.842913 | orchestrator | TASK [Create manager role] ***************************************************** 2025-07-12 16:06:16.842924 | orchestrator | Saturday 12 July 2025 16:06:13 +0000 (0:00:03.709) 0:00:50.455 ********* 2025-07-12 16:06:16.842935 | orchestrator | ok: [localhost] 2025-07-12 16:06:16.842945 | orchestrator | 2025-07-12 16:06:16.842956 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 16:06:16.842968 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 16:06:16.842979 | orchestrator 
| 2025-07-12 16:06:16.842990 | orchestrator | 2025-07-12 16:06:16.843001 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 16:06:16.843012 | orchestrator | Saturday 12 July 2025 16:06:16 +0000 (0:00:03.544) 0:00:53.999 ********* 2025-07-12 16:06:16.843045 | orchestrator | =============================================================================== 2025-07-12 16:06:16.843057 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.98s 2025-07-12 16:06:16.843068 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.33s 2025-07-12 16:06:16.843079 | orchestrator | Create volume type local ------------------------------------------------ 7.10s 2025-07-12 16:06:16.843090 | orchestrator | Get volume type local --------------------------------------------------- 6.72s 2025-07-12 16:06:16.843100 | orchestrator | Set public network to default ------------------------------------------- 6.20s 2025-07-12 16:06:16.843111 | orchestrator | Create public network --------------------------------------------------- 5.20s 2025-07-12 16:06:16.843122 | orchestrator | Create public subnet ---------------------------------------------------- 4.38s 2025-07-12 16:06:16.843133 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.71s 2025-07-12 16:06:16.843144 | orchestrator | Create manager role ----------------------------------------------------- 3.54s 2025-07-12 16:06:16.843154 | orchestrator | Gathering Facts --------------------------------------------------------- 1.76s 2025-07-12 16:06:18.938993 | orchestrator | 2025-07-12 16:06:18 | INFO  | It takes a moment until task 90b517de-a522-44bb-8c40-e46c6b4d7064 (image-manager) has been started and output is visible here. 
2025-07-12 16:06:56.565593 | orchestrator | 2025-07-12 16:06:22 | INFO  | Processing image 'Cirros 0.6.2' 2025-07-12 16:06:56.565706 | orchestrator | 2025-07-12 16:06:22 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-07-12 16:06:56.565724 | orchestrator | 2025-07-12 16:06:22 | INFO  | Importing image Cirros 0.6.2 2025-07-12 16:06:56.565735 | orchestrator | 2025-07-12 16:06:22 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-12 16:06:56.565746 | orchestrator | 2025-07-12 16:06:24 | INFO  | Waiting for import to complete... 2025-07-12 16:06:56.565756 | orchestrator | 2025-07-12 16:06:34 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-07-12 16:06:56.565767 | orchestrator | 2025-07-12 16:06:35 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-07-12 16:06:56.565777 | orchestrator | 2025-07-12 16:06:35 | INFO  | Setting internal_version = 0.6.2 2025-07-12 16:06:56.565787 | orchestrator | 2025-07-12 16:06:35 | INFO  | Setting image_original_user = cirros 2025-07-12 16:06:56.565798 | orchestrator | 2025-07-12 16:06:35 | INFO  | Adding tag os:cirros 2025-07-12 16:06:56.565807 | orchestrator | 2025-07-12 16:06:35 | INFO  | Setting property architecture: x86_64 2025-07-12 16:06:56.565818 | orchestrator | 2025-07-12 16:06:35 | INFO  | Setting property hw_disk_bus: scsi 2025-07-12 16:06:56.565827 | orchestrator | 2025-07-12 16:06:35 | INFO  | Setting property hw_rng_model: virtio 2025-07-12 16:06:56.565837 | orchestrator | 2025-07-12 16:06:36 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-12 16:06:56.565848 | orchestrator | 2025-07-12 16:06:36 | INFO  | Setting property hw_watchdog_action: reset 2025-07-12 16:06:56.565857 | orchestrator | 2025-07-12 16:06:36 | INFO  | Setting property hypervisor_type: qemu 2025-07-12 16:06:56.565867 | orchestrator | 2025-07-12 16:06:36 | INFO  | 
Setting property os_distro: cirros 2025-07-12 16:06:56.565876 | orchestrator | 2025-07-12 16:06:37 | INFO  | Setting property replace_frequency: never 2025-07-12 16:06:56.565886 | orchestrator | 2025-07-12 16:06:37 | INFO  | Setting property uuid_validity: none 2025-07-12 16:06:56.565896 | orchestrator | 2025-07-12 16:06:37 | INFO  | Setting property provided_until: none 2025-07-12 16:06:56.565905 | orchestrator | 2025-07-12 16:06:37 | INFO  | Setting property image_description: Cirros 2025-07-12 16:06:56.565941 | orchestrator | 2025-07-12 16:06:37 | INFO  | Setting property image_name: Cirros 2025-07-12 16:06:56.565966 | orchestrator | 2025-07-12 16:06:38 | INFO  | Setting property internal_version: 0.6.2 2025-07-12 16:06:56.565976 | orchestrator | 2025-07-12 16:06:38 | INFO  | Setting property image_original_user: cirros 2025-07-12 16:06:56.565991 | orchestrator | 2025-07-12 16:06:38 | INFO  | Setting property os_version: 0.6.2 2025-07-12 16:06:56.566002 | orchestrator | 2025-07-12 16:06:38 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-12 16:06:56.566117 | orchestrator | 2025-07-12 16:06:39 | INFO  | Setting property image_build_date: 2023-05-30 2025-07-12 16:06:56.566133 | orchestrator | 2025-07-12 16:06:39 | INFO  | Checking status of 'Cirros 0.6.2' 2025-07-12 16:06:56.566143 | orchestrator | 2025-07-12 16:06:39 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-07-12 16:06:56.566152 | orchestrator | 2025-07-12 16:06:39 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-07-12 16:06:56.566162 | orchestrator | 2025-07-12 16:06:39 | INFO  | Processing image 'Cirros 0.6.3' 2025-07-12 16:06:56.566172 | orchestrator | 2025-07-12 16:06:39 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-07-12 16:06:56.566221 | orchestrator | 2025-07-12 16:06:39 | INFO  | Importing image Cirros 0.6.3 
2025-07-12 16:06:56.566233 | orchestrator | 2025-07-12 16:06:39 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-12 16:06:56.566243 | orchestrator | 2025-07-12 16:06:41 | INFO  | Waiting for import to complete... 2025-07-12 16:06:56.566253 | orchestrator | 2025-07-12 16:06:51 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-07-12 16:06:56.566262 | orchestrator | 2025-07-12 16:06:51 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-07-12 16:06:56.566272 | orchestrator | 2025-07-12 16:06:51 | INFO  | Setting internal_version = 0.6.3 2025-07-12 16:06:56.566300 | orchestrator | 2025-07-12 16:06:51 | INFO  | Setting image_original_user = cirros 2025-07-12 16:06:56.566311 | orchestrator | 2025-07-12 16:06:51 | INFO  | Adding tag os:cirros 2025-07-12 16:06:56.566321 | orchestrator | 2025-07-12 16:06:51 | INFO  | Setting property architecture: x86_64 2025-07-12 16:06:56.566331 | orchestrator | 2025-07-12 16:06:52 | INFO  | Setting property hw_disk_bus: scsi 2025-07-12 16:06:56.566341 | orchestrator | 2025-07-12 16:06:52 | INFO  | Setting property hw_rng_model: virtio 2025-07-12 16:06:56.566350 | orchestrator | 2025-07-12 16:06:52 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-12 16:06:56.566360 | orchestrator | 2025-07-12 16:06:52 | INFO  | Setting property hw_watchdog_action: reset 2025-07-12 16:06:56.566370 | orchestrator | 2025-07-12 16:06:52 | INFO  | Setting property hypervisor_type: qemu 2025-07-12 16:06:56.566379 | orchestrator | 2025-07-12 16:06:53 | INFO  | Setting property os_distro: cirros 2025-07-12 16:06:56.566389 | orchestrator | 2025-07-12 16:06:53 | INFO  | Setting property replace_frequency: never 2025-07-12 16:06:56.566399 | orchestrator | 2025-07-12 16:06:53 | INFO  | Setting property uuid_validity: none 2025-07-12 16:06:56.566409 | orchestrator | 2025-07-12 16:06:53 | INFO  | Setting property provided_until: none 2025-07-12 
16:06:56.566418 | orchestrator | 2025-07-12 16:06:54 | INFO  | Setting property image_description: Cirros 2025-07-12 16:06:56.566438 | orchestrator | 2025-07-12 16:06:54 | INFO  | Setting property image_name: Cirros 2025-07-12 16:06:56.566448 | orchestrator | 2025-07-12 16:06:54 | INFO  | Setting property internal_version: 0.6.3 2025-07-12 16:06:56.566458 | orchestrator | 2025-07-12 16:06:54 | INFO  | Setting property image_original_user: cirros 2025-07-12 16:06:56.566468 | orchestrator | 2025-07-12 16:06:54 | INFO  | Setting property os_version: 0.6.3 2025-07-12 16:06:56.566478 | orchestrator | 2025-07-12 16:06:55 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-12 16:06:56.566488 | orchestrator | 2025-07-12 16:06:55 | INFO  | Setting property image_build_date: 2024-09-26 2025-07-12 16:06:56.566497 | orchestrator | 2025-07-12 16:06:55 | INFO  | Checking status of 'Cirros 0.6.3' 2025-07-12 16:06:56.566507 | orchestrator | 2025-07-12 16:06:55 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-07-12 16:06:56.566517 | orchestrator | 2025-07-12 16:06:55 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-07-12 16:06:56.837902 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-07-12 16:06:58.763782 | orchestrator | 2025-07-12 16:06:58 | INFO  | date: 2025-07-12 2025-07-12 16:06:58.764779 | orchestrator | 2025-07-12 16:06:58 | INFO  | image: octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 16:06:58.765004 | orchestrator | 2025-07-12 16:06:58 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 16:06:58.765787 | orchestrator | 2025-07-12 16:06:58 | INFO  | checksum_url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2.CHECKSUM 2025-07-12 16:06:58.789081 | orchestrator | 2025-07-12 16:06:58 | INFO  | checksum: c95855ae58dddb977df0d8e11b851fc66dd0abac9e608812e6020c0a95df8f26 2025-07-12 16:06:58.869802 | orchestrator | 2025-07-12 16:06:58 | INFO  | It takes a moment until task 17db74ac-6f04-442f-8785-5a5c967c35c5 (image-manager) has been started and output is visible here. 2025-07-12 16:07:59.696715 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-07-12 16:07:59.696877 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-07-12 16:07:59.696911 | orchestrator | 2025-07-12 16:07:01 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 16:07:59.696935 | orchestrator | 2025-07-12 16:07:01 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2: 200 2025-07-12 16:07:59.696956 | orchestrator | 2025-07-12 16:07:01 | INFO  | Importing image OpenStack Octavia Amphora 2025-07-12 2025-07-12 16:07:59.696979 | orchestrator | 2025-07-12 16:07:01 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 16:07:59.697001 | orchestrator | 2025-07-12 16:07:02 | INFO  | Waiting for image to leave queued state... 2025-07-12 16:07:59.697020 | orchestrator | 2025-07-12 16:07:04 | INFO  | Waiting for import to complete... 
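The amphora bootstrap step above resolves a `checksum_url` for the qcow2 and logs the expected SHA-256 before importing. A hedged sketch of that kind of verification using only the standard library — the function name `sha256_matches` is illustrative and not from the bootstrap script:

```python
import hashlib


def sha256_matches(data: bytes, expected_hex: str) -> bool:
    """Compare the SHA-256 digest of `data` against a published checksum
    string, like the 'checksum: c95855ae...' value logged above."""
    return hashlib.sha256(data).hexdigest() == expected_hex.strip().lower()


# Self-contained demonstration (real usage would hash the downloaded image):
payload = b"octavia-amphora-haproxy example bytes"
print(sha256_matches(payload, hashlib.sha256(payload).hexdigest()))
# True
```

Verifying the digest before handing the URL to the image import avoids registering a corrupted or tampered amphora image in Glance.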
2025-07-12 16:07:59.697037 | orchestrator | 2025-07-12 16:07:14 | INFO  | Waiting for import to complete... 2025-07-12 16:07:59.697088 | orchestrator | 2025-07-12 16:07:24 | INFO  | Waiting for import to complete... 2025-07-12 16:07:59.697107 | orchestrator | 2025-07-12 16:07:34 | INFO  | Waiting for import to complete... 2025-07-12 16:07:59.697210 | orchestrator | 2025-07-12 16:07:45 | INFO  | Waiting for import to complete... 2025-07-12 16:07:59.697226 | orchestrator | 2025-07-12 16:07:55 | INFO  | Import of 'OpenStack Octavia Amphora 2025-07-12' successfully completed, reloading images 2025-07-12 16:07:59.697240 | orchestrator | 2025-07-12 16:07:55 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 16:07:59.697254 | orchestrator | 2025-07-12 16:07:55 | INFO  | Setting internal_version = 2025-07-12 2025-07-12 16:07:59.697266 | orchestrator | 2025-07-12 16:07:55 | INFO  | Setting image_original_user = ubuntu 2025-07-12 16:07:59.697278 | orchestrator | 2025-07-12 16:07:55 | INFO  | Adding tag amphora 2025-07-12 16:07:59.697291 | orchestrator | 2025-07-12 16:07:55 | INFO  | Adding tag os:ubuntu 2025-07-12 16:07:59.697303 | orchestrator | 2025-07-12 16:07:55 | INFO  | Setting property architecture: x86_64 2025-07-12 16:07:59.697315 | orchestrator | 2025-07-12 16:07:56 | INFO  | Setting property hw_disk_bus: scsi 2025-07-12 16:07:59.697344 | orchestrator | 2025-07-12 16:07:56 | INFO  | Setting property hw_rng_model: virtio 2025-07-12 16:07:59.697367 | orchestrator | 2025-07-12 16:07:56 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-12 16:07:59.697380 | orchestrator | 2025-07-12 16:07:56 | INFO  | Setting property hw_watchdog_action: reset 2025-07-12 16:07:59.697392 | orchestrator | 2025-07-12 16:07:56 | INFO  | Setting property hypervisor_type: qemu 2025-07-12 16:07:59.697405 | orchestrator | 2025-07-12 16:07:57 | INFO  | Setting property os_distro: ubuntu 2025-07-12 16:07:59.697429 | orchestrator | 2025-07-12 16:07:57 | 
INFO  | Setting property replace_frequency: quarterly 2025-07-12 16:07:59.697442 | orchestrator | 2025-07-12 16:07:57 | INFO  | Setting property uuid_validity: last-1 2025-07-12 16:07:59.697455 | orchestrator | 2025-07-12 16:07:57 | INFO  | Setting property provided_until: none 2025-07-12 16:07:59.697467 | orchestrator | 2025-07-12 16:07:57 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-07-12 16:07:59.697480 | orchestrator | 2025-07-12 16:07:58 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-07-12 16:07:59.697493 | orchestrator | 2025-07-12 16:07:58 | INFO  | Setting property internal_version: 2025-07-12 2025-07-12 16:07:59.697505 | orchestrator | 2025-07-12 16:07:58 | INFO  | Setting property image_original_user: ubuntu 2025-07-12 16:07:59.697516 | orchestrator | 2025-07-12 16:07:58 | INFO  | Setting property os_version: 2025-07-12 2025-07-12 16:07:59.697530 | orchestrator | 2025-07-12 16:07:58 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 16:07:59.697566 | orchestrator | 2025-07-12 16:07:59 | INFO  | Setting property image_build_date: 2025-07-12 2025-07-12 16:07:59.697578 | orchestrator | 2025-07-12 16:07:59 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 16:07:59.697589 | orchestrator | 2025-07-12 16:07:59 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 16:07:59.697599 | orchestrator | 2025-07-12 16:07:59 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-07-12 16:07:59.697610 | orchestrator | 2025-07-12 16:07:59 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-07-12 16:07:59.697633 | orchestrator | 2025-07-12 16:07:59 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-07-12 16:07:59.697644 | orchestrator | 
2025-07-12 16:07:59 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-07-12 16:08:00.443987 | orchestrator | ok: Runtime: 0:03:05.830063 2025-07-12 16:08:00.501882 | 2025-07-12 16:08:00.502060 | TASK [Run checks] 2025-07-12 16:08:01.192509 | orchestrator | + set -e 2025-07-12 16:08:01.192701 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 16:08:01.192725 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 16:08:01.192747 | orchestrator | ++ INTERACTIVE=false 2025-07-12 16:08:01.192761 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 16:08:01.192774 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 16:08:01.192788 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-12 16:08:01.193788 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-12 16:08:01.200322 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 16:08:01.200357 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-07-12 16:08:01.200376 | orchestrator | 2025-07-12 16:08:01.200391 | orchestrator | # CHECK 2025-07-12 16:08:01.200404 | orchestrator | 2025-07-12 16:08:01.200417 | orchestrator | + echo 2025-07-12 16:08:01.200441 | orchestrator | + echo '# CHECK' 2025-07-12 16:08:01.200455 | orchestrator | + echo 2025-07-12 16:08:01.200475 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-12 16:08:01.201446 | orchestrator | ++ semver 9.2.0 5.0.0 2025-07-12 16:08:01.265643 | orchestrator | 2025-07-12 16:08:01.265687 | orchestrator | ## Containers @ testbed-manager 2025-07-12 16:08:01.265700 | orchestrator | 2025-07-12 16:08:01.265713 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-12 16:08:01.265725 | orchestrator | + echo 2025-07-12 16:08:01.265736 | orchestrator | + echo '## Containers @ testbed-manager' 2025-07-12 16:08:01.265748 | orchestrator | + echo 2025-07-12 16:08:01.265759 | 
orchestrator | + osism container testbed-manager ps 2025-07-12 16:08:03.664333 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-12 16:08:03.664466 | orchestrator | 2673ff7c94d8 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter 2025-07-12 16:08:03.664491 | orchestrator | de2bfae3afc5 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2025-07-12 16:08:03.664512 | orchestrator | dffc08cf81e6 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-07-12 16:08:03.664524 | orchestrator | b09d592b54d1 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-07-12 16:08:03.664536 | orchestrator | 315a8a4d17cf registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2025-07-12 16:08:03.664548 | orchestrator | 6a20c71da52f registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient 2025-07-12 16:08:03.664564 | orchestrator | d48c528f78aa registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-07-12 16:08:03.664576 | orchestrator | b959e16f4b1f registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-07-12 16:08:03.664588 | orchestrator | 30fa3aee2139 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin 2025-07-12 16:08:03.664626 | orchestrator | 9b1e868ac0df registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 30 
minutes ago Up 30 minutes fluentd 2025-07-12 16:08:03.664638 | orchestrator | f3a52e6714e1 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 30 minutes openstackclient 2025-07-12 16:08:03.664650 | orchestrator | d64f54fac3a3 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 31 minutes ago Up 30 minutes (healthy) 8080/tcp homer 2025-07-12 16:08:03.664662 | orchestrator | a9e1a635ab06 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 54 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-07-12 16:08:03.664679 | orchestrator | ca3ac52b8108 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" 58 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2025-07-12 16:08:03.664710 | orchestrator | 530c127078c1 registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) osism-ansible 2025-07-12 16:08:03.664723 | orchestrator | 05338f366c6a registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) ceph-ansible 2025-07-12 16:08:03.664734 | orchestrator | 564335571ce7 registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) kolla-ansible 2025-07-12 16:08:03.664746 | orchestrator | 1cfe2dc1160a registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) osism-kubernetes 2025-07-12 16:08:03.664757 | orchestrator | 6ba1d3656e27 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 58 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2025-07-12 16:08:03.664768 | orchestrator | 1df75bf69335 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 58 minutes ago Up 39 minutes (healthy) manager-flower-1 2025-07-12 16:08:03.664779 | orchestrator | ab9635ae0564 
registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" 58 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1 2025-07-12 16:08:03.664790 | orchestrator | d50a1e902f42 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 58 minutes ago Up 39 minutes (healthy) manager-beat-1 2025-07-12 16:08:03.664802 | orchestrator | 1cf58a241a16 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 58 minutes ago Up 39 minutes (healthy) manager-openstack-1 2025-07-12 16:08:03.664821 | orchestrator | daeeab656358 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" 58 minutes ago Up 39 minutes (healthy) osismclient 2025-07-12 16:08:03.664832 | orchestrator | bdac47029f2b registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 58 minutes ago Up 39 minutes (healthy) manager-listener-1 2025-07-12 16:08:03.664844 | orchestrator | ec45b861fff0 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 58 minutes ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-07-12 16:08:03.664855 | orchestrator | be329551bbbe registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 58 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1 2025-07-12 16:08:03.664866 | orchestrator | c4b43a1fba2a registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-07-12 16:08:03.944597 | orchestrator | 2025-07-12 16:08:03.944712 | orchestrator | ## Images @ testbed-manager 2025-07-12 16:08:03.944729 | orchestrator | 2025-07-12 16:08:03.944741 | orchestrator | + echo 2025-07-12 16:08:03.944754 | orchestrator | + echo '## Images @ testbed-manager' 2025-07-12 16:08:03.944767 | orchestrator | + echo 2025-07-12 16:08:03.944778 | orchestrator | + osism container testbed-manager images 2025-07-12 16:08:06.086945 | 
orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-12 16:08:06.087056 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 6 hours ago 571MB 2025-07-12 16:08:06.087074 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d2fcb41febbc 13 hours ago 11.5MB 2025-07-12 16:08:06.087085 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 751f5a3be689 13 hours ago 234MB 2025-07-12 16:08:06.087096 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 19 hours ago 628MB 2025-07-12 16:08:06.087149 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 19 hours ago 746MB 2025-07-12 16:08:06.087161 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 19 hours ago 318MB 2025-07-12 16:08:06.087171 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 19 hours ago 891MB 2025-07-12 16:08:06.087180 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 19 hours ago 360MB 2025-07-12 16:08:06.087190 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 19 hours ago 410MB 2025-07-12 16:08:06.087200 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 19 hours ago 456MB 2025-07-12 16:08:06.087210 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 19 hours ago 358MB 2025-07-12 16:08:06.087230 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 20 hours ago 575MB 2025-07-12 16:08:06.087240 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 20 hours ago 535MB 2025-07-12 16:08:06.087270 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 21 hours ago 308MB 2025-07-12 16:08:06.087281 | 
orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 21 hours ago 1.21GB 2025-07-12 16:08:06.087291 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 2 days ago 310MB 2025-07-12 16:08:06.087301 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine 555db38b5b92 5 days ago 41.4MB 2025-07-12 16:08:06.087311 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 weeks ago 226MB 2025-07-12 16:08:06.087320 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 7fb85a4198e9 4 weeks ago 329MB 2025-07-12 16:08:06.087330 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 2 months ago 453MB 2025-07-12 16:08:06.087340 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 5 months ago 571MB 2025-07-12 16:08:06.087349 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 10 months ago 300MB 2025-07-12 16:08:06.087359 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 13 months ago 146MB 2025-07-12 16:08:06.339493 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-12 16:08:06.339734 | orchestrator | ++ semver 9.2.0 5.0.0 2025-07-12 16:08:06.380433 | orchestrator | 2025-07-12 16:08:06.380526 | orchestrator | ## Containers @ testbed-node-0 2025-07-12 16:08:06.380540 | orchestrator | 2025-07-12 16:08:06.380553 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-12 16:08:06.380565 | orchestrator | + echo 2025-07-12 16:08:06.380577 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-07-12 16:08:06.380590 | orchestrator | + echo 2025-07-12 16:08:06.380600 | orchestrator | + osism container testbed-node-0 ps 2025-07-12 16:08:08.580439 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-12 16:08:08.580559 | orchestrator | 5f0086c1edf0 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init 
--single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-07-12 16:08:08.580811 | orchestrator | d806f03f5978 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-07-12 16:08:08.580831 | orchestrator | a09762537389 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-12 16:08:08.580843 | orchestrator | 2f7dcc373a1a registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-07-12 16:08:08.580854 | orchestrator | 269d7775ad03 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-07-12 16:08:08.580866 | orchestrator | f05bf64c7a61 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-07-12 16:08:08.581331 | orchestrator | aafc7a26e2f3 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-07-12 16:08:08.581373 | orchestrator | 6bff894474eb registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-07-12 16:08:08.581385 | orchestrator | 796c51e30c46 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-07-12 16:08:08.581423 | orchestrator | 665502ac8d9e registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-07-12 16:08:08.581435 | orchestrator | 189b01378285 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 
minutes (healthy) designate_worker 2025-07-12 16:08:08.581446 | orchestrator | c5963b0ace4a registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-07-12 16:08:08.581457 | orchestrator | 603cfe409299 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-07-12 16:08:08.581467 | orchestrator | 6f1abf04eb9f registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-07-12 16:08:08.581478 | orchestrator | 941daaa5feb1 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-07-12 16:08:08.581489 | orchestrator | 270cec02c38c registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-07-12 16:08:08.582109 | orchestrator | 66b706850598 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-07-12 16:08:08.582236 | orchestrator | b07a190d374a registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-07-12 16:08:08.582252 | orchestrator | 4c44aadc8933 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-07-12 16:08:08.582265 | orchestrator | 7a664dacc5bd registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-07-12 16:08:08.582276 | orchestrator | ebc8586ca609 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 
11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-07-12 16:08:08.582287 | orchestrator | d72f983ea879 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-07-12 16:08:08.582298 | orchestrator | f2a7c4871bee registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-12 16:08:08.582309 | orchestrator | 834a65353ce9 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-07-12 16:08:08.582320 | orchestrator | d2f4fae35f54 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-07-12 16:08:08.582333 | orchestrator | af044240725f registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-07-12 16:08:08.582362 | orchestrator | 792b1bbb757d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-07-12 16:08:08.582394 | orchestrator | d77218ced199 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-07-12 16:08:08.582405 | orchestrator | 9272568f4f2e registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-07-12 16:08:08.582417 | orchestrator | c7f4d72bcd73 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-07-12 16:08:08.582433 | orchestrator | 623d80e4b112 
registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-07-12 16:08:08.582445 | orchestrator | 4d553817c68e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2025-07-12 16:08:08.582456 | orchestrator | f161554bca4e registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-07-12 16:08:08.582484 | orchestrator | e25bf071e537 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-07-12 16:08:08.582496 | orchestrator | 9b200ff656ab registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-07-12 16:08:08.582507 | orchestrator | 73b250c9e817 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-07-12 16:08:08.582518 | orchestrator | 0dd30abc3ae2 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-07-12 16:08:08.582534 | orchestrator | 8b33f582c54b registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-07-12 16:08:08.582545 | orchestrator | 4ccb25ca6f03 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-07-12 16:08:08.582556 | orchestrator | 47b2241efd88 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2025-07-12 16:08:08.582567 | orchestrator | bfbaad56c8c4 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 
"dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-07-12 16:08:08.582578 | orchestrator | acfee5a0ac76 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-07-12 16:08:08.582589 | orchestrator | d77c76b9f32c registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-07-12 16:08:08.582600 | orchestrator | 358df837d581 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-07-12 16:08:08.582611 | orchestrator | ea3cce269bff registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db 2025-07-12 16:08:08.582629 | orchestrator | 89c40a5bdc1d registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-07-12 16:08:08.582640 | orchestrator | 9799abf00c6b registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-07-12 16:08:08.582651 | orchestrator | 65637d68c52d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0 2025-07-12 16:08:08.582662 | orchestrator | e032b109526f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-07-12 16:08:08.582673 | orchestrator | 7d03304a4255 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-07-12 16:08:08.582684 | orchestrator | 3f6dc05d1fb2 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-07-12 16:08:08.582694 | orchestrator | 
2efd0ef6c61f registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-07-12 16:08:08.582705 | orchestrator | b74b6c046ea1 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-07-12 16:08:08.582716 | orchestrator | 5b51d37186b1 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-07-12 16:08:08.582738 | orchestrator | 7dccee28a0ac registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-07-12 16:08:08.582749 | orchestrator | 0b80cd35cd92 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-07-12 16:08:08.582760 | orchestrator | 1ea7b6e266fa registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-07-12 16:08:08.849291 | orchestrator |
2025-07-12 16:08:08.849395 | orchestrator | ## Images @ testbed-node-0
2025-07-12 16:08:08.849410 | orchestrator |
2025-07-12 16:08:08.849423 | orchestrator | + echo
2025-07-12 16:08:08.849435 | orchestrator | + echo '## Images @ testbed-node-0'
2025-07-12 16:08:08.849447 | orchestrator | + echo
2025-07-12 16:08:08.849459 | orchestrator | + osism container testbed-node-0 images
2025-07-12 16:08:11.075263 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 16:08:11.075372 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 19 hours ago 628MB
2025-07-12 16:08:11.075400 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 19 hours ago 329MB
2025-07-12 16:08:11.075413 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 19 hours ago 326MB
2025-07-12 16:08:11.075424 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 19 hours ago 1.59GB
2025-07-12 16:08:11.075434 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 19 hours ago 1.55GB
2025-07-12 16:08:11.075445 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 19 hours ago 417MB
2025-07-12 16:08:11.075483 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 19 hours ago 318MB
2025-07-12 16:08:11.075495 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 19 hours ago 375MB
2025-07-12 16:08:11.075506 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 19 hours ago 746MB
2025-07-12 16:08:11.075516 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 19 hours ago 1.01GB
2025-07-12 16:08:11.075538 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 19 hours ago 318MB
2025-07-12 16:08:11.075549 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 19 hours ago 361MB
2025-07-12 16:08:11.075560 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 19 hours ago 361MB
2025-07-12 16:08:11.075572 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 19 hours ago 1.21GB
2025-07-12 16:08:11.075583 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 19 hours ago 353MB
2025-07-12 16:08:11.075594 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 19 hours ago 410MB
2025-07-12 16:08:11.075604 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 19 hours ago 344MB
2025-07-12 16:08:11.075615 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 19 hours ago 358MB
2025-07-12 16:08:11.075626 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 19 hours ago 324MB
2025-07-12 16:08:11.075636 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 19 hours ago 351MB
2025-07-12 16:08:11.075647 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 19 hours ago 324MB
2025-07-12 16:08:11.075658 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 19 hours ago 590MB
2025-07-12 16:08:11.075668 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 19 hours ago 946MB
2025-07-12 16:08:11.075679 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 19 hours ago 947MB
2025-07-12 16:08:11.075690 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 19 hours ago 947MB
2025-07-12 16:08:11.075700 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 19 hours ago 946MB
2025-07-12 16:08:11.075711 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 19 hours ago 1.04GB
2025-07-12 16:08:11.075722 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 19 hours ago 1.04GB
2025-07-12 16:08:11.075732 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 19 hours ago 1.1GB
2025-07-12 16:08:11.075743 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 19 hours ago 1.1GB
2025-07-12 16:08:11.075757 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 19 hours ago 1.12GB
2025-07-12 16:08:11.075789 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 19 hours ago 1.1GB
2025-07-12 16:08:11.075813 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 19 hours ago 1.12GB
2025-07-12 16:08:11.075826 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 19 hours ago 1.15GB
2025-07-12 16:08:11.075840 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 19 hours ago 1.04GB
2025-07-12 16:08:11.075859 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 19 hours ago 1.06GB
2025-07-12 16:08:11.075872 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 19 hours ago 1.06GB
2025-07-12 16:08:11.075885 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 19 hours ago 1.06GB
2025-07-12 16:08:11.075915 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 19 hours ago 1.41GB
2025-07-12 16:08:11.075928 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 19 hours ago 1.41GB
2025-07-12 16:08:11.075952 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 19 hours ago 1.29GB
2025-07-12 16:08:11.075965 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 19 hours ago 1.42GB
2025-07-12 16:08:11.075978 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 19 hours ago 1.29GB
2025-07-12 16:08:11.075991 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 19 hours ago 1.29GB
2025-07-12 16:08:11.076003 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 19 hours ago 1.2GB
2025-07-12 16:08:11.076017 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 19 hours ago 1.31GB
2025-07-12 16:08:11.076030 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 19 hours ago 1.05GB
2025-07-12 16:08:11.076043 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 19 hours ago 1.05GB
2025-07-12 16:08:11.076056 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 19 hours ago 1.05GB
2025-07-12 16:08:11.076068 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 19 hours ago 1.06GB
2025-07-12 16:08:11.076081 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 19 hours ago 1.06GB
2025-07-12 16:08:11.076094 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 19 hours ago 1.05GB
2025-07-12 16:08:11.076107 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 19 hours ago 1.11GB
2025-07-12 16:08:11.076136 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 19 hours ago 1.11GB
2025-07-12 16:08:11.076147 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 19 hours ago 1.11GB
2025-07-12 16:08:11.076158 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 19 hours ago 1.13GB
2025-07-12 16:08:11.076169 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 19 hours ago 1.11GB
2025-07-12 16:08:11.076180 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 19 hours ago 1.24GB
2025-07-12 16:08:11.076191 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 19 hours ago 1.04GB
2025-07-12 16:08:11.076209 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 19 hours ago 1.04GB
2025-07-12 16:08:11.076220 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 19 hours ago 1.04GB
2025-07-12 16:08:11.076231 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 19 hours ago 1.04GB
2025-07-12 16:08:11.076242 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 2 months ago 1.27GB
2025-07-12 16:08:11.325030 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 16:08:11.325168 | orchestrator | ++ semver 9.2.0 5.0.0
2025-07-12 16:08:11.385490 | orchestrator |
2025-07-12 16:08:11.385600 | orchestrator | ## Containers @ testbed-node-1
2025-07-12 16:08:11.385615 | orchestrator |
2025-07-12 16:08:11.385627 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-12 16:08:11.385638 | orchestrator | + echo
2025-07-12 16:08:11.385651 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-07-12 16:08:11.385663 | orchestrator | + echo
2025-07-12 16:08:11.385675 | orchestrator | + osism container testbed-node-1 ps
2025-07-12 16:08:13.550823 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 16:08:13.550934 | orchestrator | 07501a9ad6b3 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-07-12 16:08:13.550951 | orchestrator | e657d5594047 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-07-12 16:08:13.550963 | orchestrator | 79218f352f9e registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-07-12 16:08:13.550975 | orchestrator | b4bddcaca49b registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-07-12 16:08:13.551033 | orchestrator | 7ea7c9144a63 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-07-12 16:08:13.551047 | orchestrator | c23951ebb487 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-07-12 16:08:13.551058 | orchestrator | 913092c27640 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-07-12 16:08:13.551069 | orchestrator | b25136390412 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-07-12 16:08:13.551080 | orchestrator | 2af005d2ec83 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-07-12 16:08:13.551091 | orchestrator | 23271fdf2673 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-07-12 16:08:13.551102 | orchestrator | 2f82f229c096 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-07-12 16:08:13.551155 | orchestrator | 3a925b46cf5c registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2025-07-12 16:08:13.551190 | orchestrator | e01915233f1b registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) neutron_server
2025-07-12 16:08:13.551201 | orchestrator | 5ffc7af4bad0 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_mdns
2025-07-12 16:08:13.551212 | orchestrator | 1e677dad8c1f registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2025-07-12 16:08:13.551223 | orchestrator | 41a57a77e261 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-07-12 16:08:13.551234 | orchestrator | 98a1637eafa8 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-07-12 16:08:13.551245 | orchestrator | c97ab49dce74 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-07-12 16:08:13.551277 | orchestrator | f75ed58cd95c registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-07-12 16:08:13.551306 | orchestrator | 3962114de3e8 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-07-12 16:08:13.551318 | orchestrator | d4a7f5aa000b registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-07-12 16:08:13.551329 | orchestrator | 4efd46343878 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-12 16:08:13.551340 | orchestrator | ac39c49bc4e9 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-07-12 16:08:13.551351 | orchestrator | 54707ce1cbe6 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-07-12 16:08:13.551362 | orchestrator | 91ba3179977e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-07-12 16:08:13.551376 | orchestrator | 9e65993f8b23 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-07-12 16:08:13.551387 | orchestrator | 226da28a604c registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-07-12 16:08:13.551398 | orchestrator | b856a84d6aac registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-07-12 16:08:13.551409 | orchestrator | f71bb2365da7 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-07-12 16:08:13.551419 | orchestrator | 2610710fd785 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-07-12 16:08:13.551438 | orchestrator | 94f9ab1be9f3 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-07-12 16:08:13.551559 | orchestrator | c10e42652ad8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2025-07-12 16:08:13.551574 | orchestrator | 1c61416ba7ce registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-07-12 16:08:13.551586 | orchestrator | 7ac2d6535a8e registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-07-12 16:08:13.551597 | orchestrator | 17d8948ae6b2 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-07-12 16:08:13.551607 | orchestrator | 3fde6f710103 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-07-12 16:08:13.551619 | orchestrator | 0cc068de5c05 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2025-07-12 16:08:13.551629 | orchestrator | 9ab170c0c391 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-07-12 16:08:13.551640 | orchestrator | 0e241232741c registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-07-12 16:08:13.551651 | orchestrator | 68374a86cc37 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2025-07-12 16:08:13.551662 | orchestrator | fb56e99ff9b3 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-07-12 16:08:13.551680 | orchestrator | f8d497d3bd7c registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2025-07-12 16:08:13.551691 | orchestrator | 2967595b6dcc registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-07-12 16:08:13.551702 | orchestrator | 5c6023a27ca8 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2025-07-12 16:08:13.551713 | orchestrator | 687d8c7bf6c9 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db
2025-07-12 16:08:13.551723 | orchestrator | c81882bfdfdd registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db
2025-07-12 16:08:13.551734 | orchestrator | 3e774178a9de registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2025-07-12 16:08:13.551745 | orchestrator | c013e2918555 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-07-12 16:08:13.551756 | orchestrator | cca4958387a8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1
2025-07-12 16:08:13.551773 | orchestrator | 3229f3bbd4bf registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-07-12 16:08:13.551784 | orchestrator | e552edd588f3 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-07-12 16:08:13.551794 | orchestrator | cb5618ec6098 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-07-12 16:08:13.551812 | orchestrator | cd93fbf2aa53 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-07-12 16:08:13.551823 | orchestrator | 6ac643313b01 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-07-12 16:08:13.551834 | orchestrator | c51307396ede registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-07-12 16:08:13.551844 | orchestrator | ab2624a15ac0 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-07-12 16:08:13.551855 | orchestrator | 810c8cc7f619 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-07-12 16:08:13.826598 | orchestrator |
2025-07-12 16:08:13.826696 | orchestrator | ## Images @ testbed-node-1
2025-07-12 16:08:13.826711 | orchestrator |
2025-07-12 16:08:13.826724 | orchestrator | + echo
2025-07-12 16:08:13.826737 | orchestrator | + echo '## Images @ testbed-node-1'
2025-07-12 16:08:13.826750 | orchestrator | + echo
2025-07-12 16:08:13.826762 | orchestrator | + osism container testbed-node-1 images
2025-07-12 16:08:16.098256 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 16:08:16.098372 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 19 hours ago 628MB
2025-07-12 16:08:16.098388 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 19 hours ago 329MB
2025-07-12 16:08:16.098400 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 19 hours ago 326MB
2025-07-12 16:08:16.098411 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 19 hours ago 1.59GB
2025-07-12 16:08:16.098422 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 19 hours ago 1.55GB
2025-07-12 16:08:16.098432 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 19 hours ago 417MB
2025-07-12 16:08:16.098443 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 19 hours ago 318MB
2025-07-12 16:08:16.098454 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 19 hours ago 375MB
2025-07-12 16:08:16.098465 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 19 hours ago 746MB
2025-07-12 16:08:16.098476 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 19 hours ago 1.01GB
2025-07-12 16:08:16.098488 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 19 hours ago 318MB
2025-07-12 16:08:16.098533 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 19 hours ago 361MB
2025-07-12 16:08:16.098546 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 19 hours ago 361MB
2025-07-12 16:08:16.098556 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 19 hours ago 1.21GB
2025-07-12 16:08:16.098584 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 19 hours ago 353MB
2025-07-12 16:08:16.098596 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 19 hours ago 410MB
2025-07-12 16:08:16.098607 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 19 hours ago 344MB
2025-07-12 16:08:16.098618 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 19 hours ago 358MB
2025-07-12 16:08:16.098628 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 19 hours ago 324MB
2025-07-12 16:08:16.098639 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 19 hours ago 351MB
2025-07-12 16:08:16.098650 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 19 hours ago 324MB
2025-07-12 16:08:16.098661 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 19 hours ago 590MB
2025-07-12 16:08:16.098672 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 19 hours ago 947MB
2025-07-12 16:08:16.098683 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 19 hours ago 946MB
2025-07-12 16:08:16.098693 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 19 hours ago 947MB
2025-07-12 16:08:16.098709 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 19 hours ago 946MB
2025-07-12 16:08:16.098720 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 19 hours ago 1.1GB
2025-07-12 16:08:16.098731 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 19 hours ago 1.1GB
2025-07-12 16:08:16.098742 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 19 hours ago 1.12GB
2025-07-12 16:08:16.098752 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 19 hours ago 1.1GB
2025-07-12 16:08:16.098763 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 19 hours ago 1.12GB
2025-07-12 16:08:16.098794 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 19 hours ago 1.15GB
2025-07-12 16:08:16.098806 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 19 hours ago 1.04GB
2025-07-12 16:08:16.098817 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 19 hours ago 1.06GB
2025-07-12 16:08:16.098828 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 19 hours ago 1.06GB
2025-07-12 16:08:16.098839 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 19 hours ago 1.06GB
2025-07-12 16:08:16.098850 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 19 hours ago 1.41GB
2025-07-12 16:08:16.098869 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 19 hours ago 1.41GB
2025-07-12 16:08:16.098880 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 19 hours ago 1.29GB
2025-07-12 16:08:16.098891 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 19 hours ago 1.42GB
2025-07-12 16:08:16.098902 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 19 hours ago 1.29GB
2025-07-12 16:08:16.098913 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 19 hours ago 1.29GB
2025-07-12 16:08:16.098925 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 19 hours ago 1.2GB
2025-07-12 16:08:16.098936 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 19 hours ago 1.31GB
2025-07-12 16:08:16.098947 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 19 hours ago 1.05GB
2025-07-12 16:08:16.098958 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 19 hours ago 1.05GB
2025-07-12 16:08:16.098969 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 19 hours ago 1.05GB
2025-07-12 16:08:16.098980 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 19 hours ago 1.06GB
2025-07-12 16:08:16.098991 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 19 hours ago 1.06GB
2025-07-12 16:08:16.099002 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 19 hours ago 1.05GB
2025-07-12 16:08:16.099013 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 19 hours ago 1.11GB
2025-07-12 16:08:16.099024 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 19 hours ago 1.13GB
2025-07-12 16:08:16.099035 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 19 hours ago 1.11GB
2025-07-12 16:08:16.099046 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 19 hours ago 1.24GB
2025-07-12 16:08:16.099057 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 2 months ago 1.27GB
2025-07-12 16:08:16.436931 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 16:08:16.437482 | orchestrator | ++ semver 9.2.0 5.0.0
2025-07-12 16:08:16.503494 | orchestrator |
2025-07-12 16:08:16.503589 | orchestrator | ## Containers @ testbed-node-2
2025-07-12 16:08:16.503604 | orchestrator |
2025-07-12 16:08:16.503617 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-12 16:08:16.503630 | orchestrator | + echo
2025-07-12 16:08:16.503642 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-07-12 16:08:16.503655 | orchestrator | + echo
2025-07-12 16:08:16.503666 | orchestrator | + osism container testbed-node-2 ps
2025-07-12 16:08:18.745607 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 16:08:18.745736 | orchestrator | df4c87b7b862 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-07-12 16:08:18.745753 | orchestrator | 1ce3c53243ff registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-07-12 16:08:18.745765 | orchestrator | 7a11579893c2 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-07-12 16:08:18.745800 | orchestrator | 14f5e2f448fb registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-07-12 16:08:18.745812 | orchestrator | e513f6b8b4a1 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-07-12 16:08:18.745822 | orchestrator | db6d96c7d6f5 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-07-12 16:08:18.745834 | orchestrator | 2ee5cb335a94 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-07-12 16:08:18.745845 | orchestrator | 892eedc66f0f registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-07-12 16:08:18.745856 | orchestrator | b8a08c789eb7 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-07-12 16:08:18.745866 | orchestrator | aaf523b976b3 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-07-12 16:08:18.745877 | orchestrator | 6ff7fc267593 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-07-12 16:08:18.745888 | orchestrator | 250f07f9df41 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2025-07-12 16:08:18.745899 | orchestrator | f763b7528e66 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-07-12 16:08:18.745909 | orchestrator | 9b9ed585f2f5 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2025-07-12 16:08:18.745920 | orchestrator | 9f28386ec757 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2025-07-12 16:08:18.745931 | orchestrator | 5d69412b49fa registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-07-12 16:08:18.745942 | orchestrator | 868116222b42 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-07-12 16:08:18.745953 | orchestrator | 4d25cbbd59ef registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-07-12 16:08:18.745963 | orchestrator | 1fd63d54c5cf registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-07-12 16:08:18.745992 | orchestrator | 380bcb991545 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-07-12 16:08:18.746004 | orchestrator | 62ca9a9b0005 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-12 16:08:18.746084 | orchestrator | b91bca088810 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-07-12 16:08:18.746097 | orchestrator | 2ecaa11985f2 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-07-12 16:08:18.746137 | orchestrator | 48e8547c86bf registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-07-12 16:08:18.746151 | orchestrator | 2e7a3293d312 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-07-12 16:08:18.746167 | orchestrator | 7b91d50ee96e registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-07-12 16:08:18.746180 | orchestrator | e00284349eec registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-07-12 16:08:18.746192 | orchestrator | c06b51140a21 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-07-12 16:08:18.746205 | orchestrator | 91144a010a90 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-07-12 16:08:18.746217 | orchestrator | fd00ad544a94 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-07-12 16:08:18.746229 | orchestrator | 35a9afeafaca registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-07-12 16:08:18.746241 | orchestrator | ef483a683fa9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2025-07-12 16:08:18.746262 | orchestrator | 687de919baed registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-07-12 16:08:18.746275 | orchestrator | 14cd915ebcf1 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-07-12 16:08:18.746287 | orchestrator | 8d605d93a767 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-07-12 16:08:18.746299 | orchestrator | f5b915086ce7 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-07-12 16:08:18.746311 | orchestrator | 61563f1e99de registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2025-07-12 16:08:18.746323 | orchestrator | 9c40fd0590ea registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-07-12 16:08:18.746336 | orchestrator | 75d2fa68e57b registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-07-12 16:08:18.746355 | orchestrator | 685f7f1f49b0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2
2025-07-12 16:08:18.746379 | orchestrator | c1b51be7e82f registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-07-12 16:08:18.746397 | orchestrator | 434ce3c90d50 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-07-12 16:08:18.746410 | orchestrator | 047ddfeb6716 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-07-12 16:08:18.746423 | orchestrator | 79c889b8777a registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd
2025-07-12 16:08:18.746435 | orchestrator | 32d457cfc43e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db
2025-07-12 16:08:18.746447 | orchestrator | 35d9cdc7701d registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db
2025-07-12 16:08:18.746459 | orchestrator | 72c438068d18 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-07-12 16:08:18.746472 | orchestrator | dbba10d72534 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2025-07-12 16:08:18.746484 | orchestrator | b6071a3711d5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2
2025-07-12 16:08:18.746495 | orchestrator | b1104f112e5b registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-07-12 16:08:18.746506 | orchestrator | e65a837fcbc0 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-07-12 16:08:18.746516 | orchestrator | 32c7d6a0e793 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 
28 minutes (healthy) redis_sentinel 2025-07-12 16:08:18.746527 | orchestrator | 715cdd85322d registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-07-12 16:08:18.746538 | orchestrator | 3964ad38090c registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-07-12 16:08:18.746549 | orchestrator | 7fc1575e65d0 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-07-12 16:08:18.746560 | orchestrator | fa059a06d541 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-07-12 16:08:18.746570 | orchestrator | 83e961b0bca9 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-07-12 16:08:19.059771 | orchestrator | 2025-07-12 16:08:19.059885 | orchestrator | ## Images @ testbed-node-2 2025-07-12 16:08:19.059902 | orchestrator | 2025-07-12 16:08:19.059915 | orchestrator | + echo 2025-07-12 16:08:19.059927 | orchestrator | + echo '## Images @ testbed-node-2' 2025-07-12 16:08:19.059940 | orchestrator | + echo 2025-07-12 16:08:19.059951 | orchestrator | + osism container testbed-node-2 images 2025-07-12 16:08:21.271024 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-12 16:08:21.271169 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 19 hours ago 628MB 2025-07-12 16:08:21.271188 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 19 hours ago 329MB 2025-07-12 16:08:21.271200 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 19 hours ago 326MB 2025-07-12 16:08:21.271211 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 19 hours ago 1.59GB 2025-07-12 
16:08:21.271222 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 19 hours ago 1.55GB 2025-07-12 16:08:21.271233 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 19 hours ago 417MB 2025-07-12 16:08:21.271244 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 19 hours ago 318MB 2025-07-12 16:08:21.271255 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 19 hours ago 746MB 2025-07-12 16:08:21.271266 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 19 hours ago 375MB 2025-07-12 16:08:21.271277 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 19 hours ago 1.01GB 2025-07-12 16:08:21.271288 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 19 hours ago 318MB 2025-07-12 16:08:21.271298 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 19 hours ago 361MB 2025-07-12 16:08:21.271309 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 19 hours ago 361MB 2025-07-12 16:08:21.271320 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 19 hours ago 1.21GB 2025-07-12 16:08:21.271330 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 19 hours ago 353MB 2025-07-12 16:08:21.271341 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 19 hours ago 410MB 2025-07-12 16:08:21.271352 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 19 hours ago 344MB 2025-07-12 16:08:21.271383 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 19 hours ago 358MB 2025-07-12 
16:08:21.271395 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 19 hours ago 351MB 2025-07-12 16:08:21.271405 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 19 hours ago 324MB 2025-07-12 16:08:21.271416 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 19 hours ago 324MB 2025-07-12 16:08:21.271426 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 19 hours ago 590MB 2025-07-12 16:08:21.271437 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 19 hours ago 946MB 2025-07-12 16:08:21.271448 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 19 hours ago 947MB 2025-07-12 16:08:21.271595 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 19 hours ago 947MB 2025-07-12 16:08:21.271614 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 19 hours ago 946MB 2025-07-12 16:08:21.271627 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 19 hours ago 1.1GB 2025-07-12 16:08:21.271640 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 19 hours ago 1.1GB 2025-07-12 16:08:21.271652 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 19 hours ago 1.12GB 2025-07-12 16:08:21.271665 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 19 hours ago 1.1GB 2025-07-12 16:08:21.271678 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 19 hours ago 1.12GB 2025-07-12 16:08:21.271690 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 19 hours ago 1.15GB 2025-07-12 
16:08:21.271703 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 19 hours ago 1.04GB 2025-07-12 16:08:21.271716 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 19 hours ago 1.06GB 2025-07-12 16:08:21.271728 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 19 hours ago 1.06GB 2025-07-12 16:08:21.271741 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 19 hours ago 1.06GB 2025-07-12 16:08:21.271751 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 19 hours ago 1.41GB 2025-07-12 16:08:21.271762 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 19 hours ago 1.41GB 2025-07-12 16:08:21.271773 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 19 hours ago 1.29GB 2025-07-12 16:08:21.271790 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 19 hours ago 1.42GB 2025-07-12 16:08:21.271802 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 19 hours ago 1.29GB 2025-07-12 16:08:21.271812 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 19 hours ago 1.29GB 2025-07-12 16:08:21.271823 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 19 hours ago 1.2GB 2025-07-12 16:08:21.271834 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 19 hours ago 1.31GB 2025-07-12 16:08:21.271845 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 19 hours ago 1.05GB 2025-07-12 16:08:21.271856 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 19 hours ago 1.05GB 2025-07-12 
16:08:21.271867 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 19 hours ago 1.05GB 2025-07-12 16:08:21.271878 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 19 hours ago 1.06GB 2025-07-12 16:08:21.271889 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 19 hours ago 1.06GB 2025-07-12 16:08:21.271900 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 19 hours ago 1.05GB 2025-07-12 16:08:21.271919 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 19 hours ago 1.11GB 2025-07-12 16:08:21.271931 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 19 hours ago 1.13GB 2025-07-12 16:08:21.271941 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 19 hours ago 1.11GB 2025-07-12 16:08:21.271952 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 19 hours ago 1.24GB 2025-07-12 16:08:21.271963 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 2 months ago 1.27GB 2025-07-12 16:08:21.530988 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-07-12 16:08:21.536397 | orchestrator | + set -e 2025-07-12 16:08:21.536442 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 16:08:21.538004 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 16:08:21.538069 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 16:08:21.538081 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 16:08:21.538092 | orchestrator | ++ CEPH_VERSION=reef 2025-07-12 16:08:21.538130 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 16:08:21.538144 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 16:08:21.538155 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 
16:08:21.538166 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-07-12 16:08:21.538177 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-12 16:08:21.538188 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-12 16:08:21.538199 | orchestrator | ++ export ARA=false 2025-07-12 16:08:21.538210 | orchestrator | ++ ARA=false 2025-07-12 16:08:21.538221 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-12 16:08:21.538232 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-12 16:08:21.538243 | orchestrator | ++ export TEMPEST=false 2025-07-12 16:08:21.538254 | orchestrator | ++ TEMPEST=false 2025-07-12 16:08:21.538265 | orchestrator | ++ export IS_ZUUL=true 2025-07-12 16:08:21.538275 | orchestrator | ++ IS_ZUUL=true 2025-07-12 16:08:21.538287 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204 2025-07-12 16:08:21.538302 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204 2025-07-12 16:08:21.538314 | orchestrator | ++ export EXTERNAL_API=false 2025-07-12 16:08:21.538325 | orchestrator | ++ EXTERNAL_API=false 2025-07-12 16:08:21.538336 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-12 16:08:21.538346 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-12 16:08:21.538357 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-12 16:08:21.538368 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-12 16:08:21.538379 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-12 16:08:21.538390 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-12 16:08:21.538401 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-12 16:08:21.538413 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-07-12 16:08:21.547326 | orchestrator | + set -e 2025-07-12 16:08:21.547363 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 16:08:21.547375 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 16:08:21.547421 | orchestrator | ++ INTERACTIVE=false 2025-07-12 16:08:21.547435 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 16:08:21.547446 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 16:08:21.547513 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-12 16:08:21.548735 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-12 16:08:21.554811 | orchestrator | 2025-07-12 16:08:21.554858 | orchestrator | # Ceph status 2025-07-12 16:08:21.554871 | orchestrator | 2025-07-12 16:08:21.554883 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 16:08:21.554895 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-07-12 16:08:21.554907 | orchestrator | + echo 2025-07-12 16:08:21.554919 | orchestrator | + echo '# Ceph status' 2025-07-12 16:08:21.554931 | orchestrator | + echo 2025-07-12 16:08:21.554943 | orchestrator | + ceph -s 2025-07-12 16:08:22.137653 | orchestrator | cluster: 2025-07-12 16:08:22.137748 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-07-12 16:08:22.137761 | orchestrator | health: HEALTH_OK 2025-07-12 16:08:22.137773 | orchestrator | 2025-07-12 16:08:22.137783 | orchestrator | services: 2025-07-12 16:08:22.137794 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m) 2025-07-12 16:08:22.137816 | orchestrator | mgr: testbed-node-2(active, since 14m), standbys: testbed-node-1, testbed-node-0 2025-07-12 16:08:22.137850 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-07-12 16:08:22.137860 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m) 2025-07-12 16:08:22.137871 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-07-12 16:08:22.137881 | orchestrator | 2025-07-12 16:08:22.137890 | orchestrator | data: 2025-07-12 16:08:22.137900 | orchestrator | volumes: 1/1 healthy 2025-07-12 16:08:22.137910 | orchestrator | pools: 14 pools, 401 pgs 2025-07-12 16:08:22.137920 | orchestrator | objects: 524 objects, 2.2 GiB 2025-07-12 
16:08:22.137930 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-07-12 16:08:22.137939 | orchestrator | pgs: 401 active+clean 2025-07-12 16:08:22.137949 | orchestrator | 2025-07-12 16:08:22.182755 | orchestrator | 2025-07-12 16:08:22.182840 | orchestrator | # Ceph versions 2025-07-12 16:08:22.182853 | orchestrator | 2025-07-12 16:08:22.182865 | orchestrator | + echo 2025-07-12 16:08:22.182876 | orchestrator | + echo '# Ceph versions' 2025-07-12 16:08:22.182888 | orchestrator | + echo 2025-07-12 16:08:22.182899 | orchestrator | + ceph versions 2025-07-12 16:08:22.749038 | orchestrator | { 2025-07-12 16:08:22.749166 | orchestrator | "mon": { 2025-07-12 16:08:22.749182 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 16:08:22.749195 | orchestrator | }, 2025-07-12 16:08:22.749206 | orchestrator | "mgr": { 2025-07-12 16:08:22.749218 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 16:08:22.749229 | orchestrator | }, 2025-07-12 16:08:22.749240 | orchestrator | "osd": { 2025-07-12 16:08:22.749251 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-07-12 16:08:22.749262 | orchestrator | }, 2025-07-12 16:08:22.749273 | orchestrator | "mds": { 2025-07-12 16:08:22.749284 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 16:08:22.749295 | orchestrator | }, 2025-07-12 16:08:22.749305 | orchestrator | "rgw": { 2025-07-12 16:08:22.749316 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 16:08:22.749327 | orchestrator | }, 2025-07-12 16:08:22.749338 | orchestrator | "overall": { 2025-07-12 16:08:22.749349 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-07-12 16:08:22.749361 | orchestrator | } 2025-07-12 16:08:22.749371 
| orchestrator | } 2025-07-12 16:08:22.799440 | orchestrator | 2025-07-12 16:08:22.799519 | orchestrator | # Ceph OSD tree 2025-07-12 16:08:22.799532 | orchestrator | 2025-07-12 16:08:22.799545 | orchestrator | + echo 2025-07-12 16:08:22.799556 | orchestrator | + echo '# Ceph OSD tree' 2025-07-12 16:08:22.799568 | orchestrator | + echo 2025-07-12 16:08:22.799579 | orchestrator | + ceph osd df tree 2025-07-12 16:08:23.342074 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-07-12 16:08:23.342217 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-07-12 16:08:23.342234 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-07-12 16:08:23.342262 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.08 1.20 201 up osd.0 2025-07-12 16:08:23.342274 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 972 MiB 899 MiB 1 KiB 74 MiB 19 GiB 4.75 0.80 189 up osd.5 2025-07-12 16:08:23.342285 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-07-12 16:08:23.342297 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.32 1.07 190 up osd.1 2025-07-12 16:08:23.342308 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.51 0.93 202 up osd.4 2025-07-12 16:08:23.342319 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-07-12 16:08:23.342330 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.73 1.14 188 up osd.2 2025-07-12 16:08:23.342362 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 971 MiB 1 KiB 74 MiB 19 GiB 5.10 0.86 200 up osd.3 2025-07-12 16:08:23.342373 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-07-12 16:08:23.342384 | orchestrator | 
MIN/MAX VAR: 0.80/1.20 STDDEV: 0.85 2025-07-12 16:08:23.397270 | orchestrator | 2025-07-12 16:08:23.397384 | orchestrator | # Ceph monitor status 2025-07-12 16:08:23.397409 | orchestrator | 2025-07-12 16:08:23.397431 | orchestrator | + echo 2025-07-12 16:08:23.397445 | orchestrator | + echo '# Ceph monitor status' 2025-07-12 16:08:23.397457 | orchestrator | + echo 2025-07-12 16:08:23.397469 | orchestrator | + ceph mon stat 2025-07-12 16:08:23.965163 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-07-12 16:08:24.011770 | orchestrator | 2025-07-12 16:08:24.011860 | orchestrator | # Ceph quorum status 2025-07-12 16:08:24.011875 | orchestrator | 2025-07-12 16:08:24.011887 | orchestrator | + echo 2025-07-12 16:08:24.011899 | orchestrator | + echo '# Ceph quorum status' 2025-07-12 16:08:24.011910 | orchestrator | + echo 2025-07-12 16:08:24.013210 | orchestrator | + ceph quorum_status 2025-07-12 16:08:24.013240 | orchestrator | + jq 2025-07-12 16:08:24.660530 | orchestrator | { 2025-07-12 16:08:24.660834 | orchestrator | "election_epoch": 6, 2025-07-12 16:08:24.660853 | orchestrator | "quorum": [ 2025-07-12 16:08:24.660865 | orchestrator | 0, 2025-07-12 16:08:24.660876 | orchestrator | 1, 2025-07-12 16:08:24.660886 | orchestrator | 2 2025-07-12 16:08:24.660897 | orchestrator | ], 2025-07-12 16:08:24.660907 | orchestrator | "quorum_names": [ 2025-07-12 16:08:24.660918 | orchestrator | "testbed-node-0", 2025-07-12 16:08:24.660929 | orchestrator | "testbed-node-1", 2025-07-12 16:08:24.660939 | orchestrator | "testbed-node-2" 2025-07-12 16:08:24.660950 | orchestrator | ], 2025-07-12 16:08:24.660961 | orchestrator | "quorum_leader_name": "testbed-node-0", 
2025-07-12 16:08:24.660973 | orchestrator | "quorum_age": 1632, 2025-07-12 16:08:24.660983 | orchestrator | "features": { 2025-07-12 16:08:24.660994 | orchestrator | "quorum_con": "4540138322906710015", 2025-07-12 16:08:24.661004 | orchestrator | "quorum_mon": [ 2025-07-12 16:08:24.661015 | orchestrator | "kraken", 2025-07-12 16:08:24.661026 | orchestrator | "luminous", 2025-07-12 16:08:24.661036 | orchestrator | "mimic", 2025-07-12 16:08:24.661047 | orchestrator | "osdmap-prune", 2025-07-12 16:08:24.661058 | orchestrator | "nautilus", 2025-07-12 16:08:24.661070 | orchestrator | "octopus", 2025-07-12 16:08:24.661088 | orchestrator | "pacific", 2025-07-12 16:08:24.661132 | orchestrator | "elector-pinging", 2025-07-12 16:08:24.661151 | orchestrator | "quincy", 2025-07-12 16:08:24.661171 | orchestrator | "reef" 2025-07-12 16:08:24.661189 | orchestrator | ] 2025-07-12 16:08:24.661203 | orchestrator | }, 2025-07-12 16:08:24.661214 | orchestrator | "monmap": { 2025-07-12 16:08:24.661225 | orchestrator | "epoch": 1, 2025-07-12 16:08:24.661236 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-07-12 16:08:24.661248 | orchestrator | "modified": "2025-07-12T15:40:49.814368Z", 2025-07-12 16:08:24.661259 | orchestrator | "created": "2025-07-12T15:40:49.814368Z", 2025-07-12 16:08:24.661270 | orchestrator | "min_mon_release": 18, 2025-07-12 16:08:24.661280 | orchestrator | "min_mon_release_name": "reef", 2025-07-12 16:08:24.661291 | orchestrator | "election_strategy": 1, 2025-07-12 16:08:24.661302 | orchestrator | "disallowed_leaders: ": "", 2025-07-12 16:08:24.661312 | orchestrator | "stretch_mode": false, 2025-07-12 16:08:24.661323 | orchestrator | "tiebreaker_mon": "", 2025-07-12 16:08:24.661333 | orchestrator | "removed_ranks: ": "", 2025-07-12 16:08:24.661344 | orchestrator | "features": { 2025-07-12 16:08:24.661354 | orchestrator | "persistent": [ 2025-07-12 16:08:24.661365 | orchestrator | "kraken", 2025-07-12 16:08:24.661375 | orchestrator | 
"luminous", 2025-07-12 16:08:24.661388 | orchestrator | "mimic", 2025-07-12 16:08:24.661401 | orchestrator | "osdmap-prune", 2025-07-12 16:08:24.661413 | orchestrator | "nautilus", 2025-07-12 16:08:24.661424 | orchestrator | "octopus", 2025-07-12 16:08:24.661437 | orchestrator | "pacific", 2025-07-12 16:08:24.661449 | orchestrator | "elector-pinging", 2025-07-12 16:08:24.661460 | orchestrator | "quincy", 2025-07-12 16:08:24.661473 | orchestrator | "reef" 2025-07-12 16:08:24.661485 | orchestrator | ], 2025-07-12 16:08:24.661497 | orchestrator | "optional": [] 2025-07-12 16:08:24.661535 | orchestrator | }, 2025-07-12 16:08:24.661547 | orchestrator | "mons": [ 2025-07-12 16:08:24.661559 | orchestrator | { 2025-07-12 16:08:24.661571 | orchestrator | "rank": 0, 2025-07-12 16:08:24.661583 | orchestrator | "name": "testbed-node-0", 2025-07-12 16:08:24.661610 | orchestrator | "public_addrs": { 2025-07-12 16:08:24.661623 | orchestrator | "addrvec": [ 2025-07-12 16:08:24.661636 | orchestrator | { 2025-07-12 16:08:24.661648 | orchestrator | "type": "v2", 2025-07-12 16:08:24.661660 | orchestrator | "addr": "192.168.16.10:3300", 2025-07-12 16:08:24.661672 | orchestrator | "nonce": 0 2025-07-12 16:08:24.661684 | orchestrator | }, 2025-07-12 16:08:24.661697 | orchestrator | { 2025-07-12 16:08:24.661709 | orchestrator | "type": "v1", 2025-07-12 16:08:24.661722 | orchestrator | "addr": "192.168.16.10:6789", 2025-07-12 16:08:24.661734 | orchestrator | "nonce": 0 2025-07-12 16:08:24.661745 | orchestrator | } 2025-07-12 16:08:24.661756 | orchestrator | ] 2025-07-12 16:08:24.661766 | orchestrator | }, 2025-07-12 16:08:24.661777 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-07-12 16:08:24.661788 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-07-12 16:08:24.661798 | orchestrator | "priority": 0, 2025-07-12 16:08:24.661808 | orchestrator | "weight": 0, 2025-07-12 16:08:24.661819 | orchestrator | "crush_location": "{}" 2025-07-12 16:08:24.661829 | orchestrator | }, 
2025-07-12 16:08:24.661840 | orchestrator | { 2025-07-12 16:08:24.661850 | orchestrator | "rank": 1, 2025-07-12 16:08:24.661861 | orchestrator | "name": "testbed-node-1", 2025-07-12 16:08:24.661871 | orchestrator | "public_addrs": { 2025-07-12 16:08:24.661882 | orchestrator | "addrvec": [ 2025-07-12 16:08:24.661892 | orchestrator | { 2025-07-12 16:08:24.661903 | orchestrator | "type": "v2", 2025-07-12 16:08:24.661913 | orchestrator | "addr": "192.168.16.11:3300", 2025-07-12 16:08:24.661924 | orchestrator | "nonce": 0 2025-07-12 16:08:24.661934 | orchestrator | }, 2025-07-12 16:08:24.661945 | orchestrator | { 2025-07-12 16:08:24.661955 | orchestrator | "type": "v1", 2025-07-12 16:08:24.661966 | orchestrator | "addr": "192.168.16.11:6789", 2025-07-12 16:08:24.661976 | orchestrator | "nonce": 0 2025-07-12 16:08:24.661987 | orchestrator | } 2025-07-12 16:08:24.661997 | orchestrator | ] 2025-07-12 16:08:24.662008 | orchestrator | }, 2025-07-12 16:08:24.662077 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-07-12 16:08:24.662090 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-07-12 16:08:24.662170 | orchestrator | "priority": 0, 2025-07-12 16:08:24.662184 | orchestrator | "weight": 0, 2025-07-12 16:08:24.662195 | orchestrator | "crush_location": "{}" 2025-07-12 16:08:24.662206 | orchestrator | }, 2025-07-12 16:08:24.662216 | orchestrator | { 2025-07-12 16:08:24.662227 | orchestrator | "rank": 2, 2025-07-12 16:08:24.662238 | orchestrator | "name": "testbed-node-2", 2025-07-12 16:08:24.662248 | orchestrator | "public_addrs": { 2025-07-12 16:08:24.662259 | orchestrator | "addrvec": [ 2025-07-12 16:08:24.662269 | orchestrator | { 2025-07-12 16:08:24.662280 | orchestrator | "type": "v2", 2025-07-12 16:08:24.662291 | orchestrator | "addr": "192.168.16.12:3300", 2025-07-12 16:08:24.662301 | orchestrator | "nonce": 0 2025-07-12 16:08:24.662312 | orchestrator | }, 2025-07-12 16:08:24.662323 | orchestrator | { 2025-07-12 16:08:24.662334 | orchestrator | "type": 
"v1", 2025-07-12 16:08:24.662345 | orchestrator | "addr": "192.168.16.12:6789", 2025-07-12 16:08:24.662355 | orchestrator | "nonce": 0 2025-07-12 16:08:24.662365 | orchestrator | } 2025-07-12 16:08:24.662375 | orchestrator | ] 2025-07-12 16:08:24.662384 | orchestrator | }, 2025-07-12 16:08:24.662393 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-07-12 16:08:24.662403 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-07-12 16:08:24.662413 | orchestrator | "priority": 0, 2025-07-12 16:08:24.662422 | orchestrator | "weight": 0, 2025-07-12 16:08:24.662431 | orchestrator | "crush_location": "{}" 2025-07-12 16:08:24.662441 | orchestrator | } 2025-07-12 16:08:24.662450 | orchestrator | ] 2025-07-12 16:08:24.662460 | orchestrator | } 2025-07-12 16:08:24.662470 | orchestrator | } 2025-07-12 16:08:24.662493 | orchestrator | 2025-07-12 16:08:24.662504 | orchestrator | # Ceph free space status 2025-07-12 16:08:24.662513 | orchestrator | + echo 2025-07-12 16:08:24.662523 | orchestrator | + echo '# Ceph free space status' 2025-07-12 16:08:24.662533 | orchestrator | 2025-07-12 16:08:24.662542 | orchestrator | + echo 2025-07-12 16:08:24.662561 | orchestrator | + ceph df 2025-07-12 16:08:25.233914 | orchestrator | --- RAW STORAGE --- 2025-07-12 16:08:25.234086 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-07-12 16:08:25.234157 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-12 16:08:25.234170 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-12 16:08:25.234182 | orchestrator | 2025-07-12 16:08:25.234194 | orchestrator | --- POOLS --- 2025-07-12 16:08:25.234205 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-07-12 16:08:25.234217 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-07-12 16:08:25.234228 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-07-12 16:08:25.234239 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-07-12 16:08:25.234251 | orchestrator 
| default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-07-12 16:08:25.234261 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-07-12 16:08:25.234272 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-07-12 16:08:25.234282 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-07-12 16:08:25.234293 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-07-12 16:08:25.234304 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-07-12 16:08:25.234314 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 16:08:25.234325 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 16:08:25.234335 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB 2025-07-12 16:08:25.234346 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 16:08:25.234356 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 16:08:25.276823 | orchestrator | ++ semver 9.2.0 5.0.0 2025-07-12 16:08:25.323517 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-12 16:08:25.323621 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-07-12 16:08:25.323642 | orchestrator | + osism apply facts 2025-07-12 16:08:37.377648 | orchestrator | 2025-07-12 16:08:37 | INFO  | Task a0f043ac-a13a-47ac-960d-c391e2c4153a (facts) was prepared for execution. 2025-07-12 16:08:37.378536 | orchestrator | 2025-07-12 16:08:37 | INFO  | It takes a moment until task a0f043ac-a13a-47ac-960d-c391e2c4153a (facts) has been started and output is visible here. 
2025-07-12 16:08:50.230426 | orchestrator |
2025-07-12 16:08:50.230554 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-12 16:08:50.230577 | orchestrator |
2025-07-12 16:08:50.230599 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 16:08:50.230618 | orchestrator | Saturday 12 July 2025  16:08:41 +0000 (0:00:00.277)       0:00:00.277 *********
2025-07-12 16:08:50.230638 | orchestrator | ok: [testbed-manager]
2025-07-12 16:08:50.230661 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:08:50.230681 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:08:50.230697 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:08:50.230709 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:08:50.230720 | orchestrator | ok: [testbed-node-4]
2025-07-12 16:08:50.230730 | orchestrator | ok: [testbed-node-5]
2025-07-12 16:08:50.230741 | orchestrator |
2025-07-12 16:08:50.230752 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 16:08:50.230763 | orchestrator | Saturday 12 July 2025  16:08:42 +0000 (0:00:01.479)       0:00:01.756 *********
2025-07-12 16:08:50.230774 | orchestrator | skipping: [testbed-manager]
2025-07-12 16:08:50.230786 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:08:50.230796 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:08:50.230807 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:08:50.230818 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:08:50.230828 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:08:50.230869 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:08:50.230881 | orchestrator |
2025-07-12 16:08:50.230891 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 16:08:50.230902 | orchestrator |
2025-07-12 16:08:50.230914 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 16:08:50.230934 | orchestrator | Saturday 12 July 2025  16:08:44 +0000 (0:00:01.245)       0:00:03.002 *********
2025-07-12 16:08:50.230952 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:08:50.230970 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:08:50.230986 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:08:50.231004 | orchestrator | ok: [testbed-manager]
2025-07-12 16:08:50.231022 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:08:50.231041 | orchestrator | ok: [testbed-node-4]
2025-07-12 16:08:50.231059 | orchestrator | ok: [testbed-node-5]
2025-07-12 16:08:50.231122 | orchestrator |
2025-07-12 16:08:50.231145 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 16:08:50.231164 | orchestrator |
2025-07-12 16:08:50.231182 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 16:08:50.231201 | orchestrator | Saturday 12 July 2025  16:08:49 +0000 (0:00:05.134)       0:00:08.136 *********
2025-07-12 16:08:50.231220 | orchestrator | skipping: [testbed-manager]
2025-07-12 16:08:50.231239 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:08:50.231259 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:08:50.231278 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:08:50.231298 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:08:50.231319 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:08:50.231338 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:08:50.231356 | orchestrator |
2025-07-12 16:08:50.231376 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 16:08:50.231395 | orchestrator | testbed-manager            : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-07-12 16:08:50.231415 | orchestrator | testbed-node-0             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-07-12 16:08:50.231436 | orchestrator | testbed-node-1             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-07-12 16:08:50.231455 | orchestrator | testbed-node-2             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-07-12 16:08:50.231495 | orchestrator | testbed-node-3             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-07-12 16:08:50.231508 | orchestrator | testbed-node-4             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-07-12 16:08:50.231519 | orchestrator | testbed-node-5             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-07-12 16:08:50.231529 | orchestrator |
2025-07-12 16:08:50.231540 | orchestrator |
2025-07-12 16:08:50.231552 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 16:08:50.231562 | orchestrator | Saturday 12 July 2025  16:08:49 +0000 (0:00:00.536)       0:00:08.673 *********
2025-07-12 16:08:50.231573 | orchestrator | ===============================================================================
2025-07-12 16:08:50.231584 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.13s
2025-07-12 16:08:50.231594 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.48s
2025-07-12 16:08:50.231605 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s
2025-07-12 16:08:50.231616 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2025-07-12 16:08:50.504390 | orchestrator | + osism validate ceph-mons
2025-07-12 16:09:20.606493 | orchestrator |
2025-07-12 16:09:20.607318 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-07-12 16:09:20.607392 | orchestrator |
2025-07-12 16:09:20.607408 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 16:09:20.607421 | orchestrator | Saturday 12 July 2025  16:09:06 +0000 (0:00:00.326)       0:00:00.326 *********
2025-07-12 16:09:20.607433 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 16:09:20.607443 | orchestrator |
2025-07-12 16:09:20.607454 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-12 16:09:20.607465 | orchestrator | Saturday 12 July 2025  16:09:07 +0000 (0:00:00.574)       0:00:00.901 *********
2025-07-12 16:09:20.607476 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 16:09:20.607487 | orchestrator |
2025-07-12 16:09:20.607498 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-12 16:09:20.607509 | orchestrator | Saturday 12 July 2025  16:09:07 +0000 (0:00:00.724)       0:00:01.625 *********
2025-07-12 16:09:20.607520 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.607532 | orchestrator |
2025-07-12 16:09:20.607543 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-07-12 16:09:20.607553 | orchestrator | Saturday 12 July 2025  16:09:08 +0000 (0:00:00.200)       0:00:01.825 *********
2025-07-12 16:09:20.607564 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.607575 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:09:20.607585 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:09:20.607596 | orchestrator |
2025-07-12 16:09:20.607607 | orchestrator | TASK [Get container info] ******************************************************
2025-07-12 16:09:20.607631 | orchestrator | Saturday 12 July 2025  16:09:08 +0000 (0:00:00.244)       0:00:02.070 *********
2025-07-12 16:09:20.607643 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:09:20.607654 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:09:20.607664 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.607675 | orchestrator |
2025-07-12 16:09:20.607685 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-07-12 16:09:20.607696 | orchestrator | Saturday 12 July 2025  16:09:09 +0000 (0:00:00.937)       0:00:03.007 *********
2025-07-12 16:09:20.607707 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.607718 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:09:20.607728 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:09:20.607739 | orchestrator |
2025-07-12 16:09:20.607750 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-07-12 16:09:20.607760 | orchestrator | Saturday 12 July 2025  16:09:09 +0000 (0:00:00.244)       0:00:03.251 *********
2025-07-12 16:09:20.607771 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.607781 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:09:20.607792 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:09:20.607803 | orchestrator |
2025-07-12 16:09:20.607814 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 16:09:20.607824 | orchestrator | Saturday 12 July 2025  16:09:09 +0000 (0:00:00.378)       0:00:03.630 *********
2025-07-12 16:09:20.607835 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.607846 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:09:20.607856 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:09:20.607867 | orchestrator |
2025-07-12 16:09:20.607878 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-07-12 16:09:20.607888 | orchestrator | Saturday 12 July 2025  16:09:10 +0000 (0:00:00.264)       0:00:03.894 *********
2025-07-12 16:09:20.607899 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.607910 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:09:20.607920 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:09:20.607931 | orchestrator |
2025-07-12 16:09:20.607942 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-07-12 16:09:20.607952 | orchestrator | Saturday 12 July 2025  16:09:10 +0000 (0:00:00.241)       0:00:04.135 *********
2025-07-12 16:09:20.607963 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.607974 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:09:20.607992 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:09:20.608003 | orchestrator |
2025-07-12 16:09:20.608014 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 16:09:20.608025 | orchestrator | Saturday 12 July 2025  16:09:10 +0000 (0:00:00.283)       0:00:04.419 *********
2025-07-12 16:09:20.608036 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.608046 | orchestrator |
2025-07-12 16:09:20.608077 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 16:09:20.608088 | orchestrator | Saturday 12 July 2025  16:09:11 +0000 (0:00:00.460)       0:00:04.879 *********
2025-07-12 16:09:20.608099 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.608110 | orchestrator |
2025-07-12 16:09:20.608120 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 16:09:20.608131 | orchestrator | Saturday 12 July 2025  16:09:11 +0000 (0:00:00.212)       0:00:05.091 *********
2025-07-12 16:09:20.608142 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.608152 | orchestrator |
2025-07-12 16:09:20.608163 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 16:09:20.608174 | orchestrator | Saturday 12 July 2025  16:09:11 +0000 (0:00:00.227)       0:00:05.319 *********
2025-07-12 16:09:20.608184 | orchestrator |
2025-07-12 16:09:20.608195 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 16:09:20.608206 | orchestrator | Saturday 12 July 2025  16:09:11 +0000 (0:00:00.064)       0:00:05.383 *********
2025-07-12 16:09:20.608216 | orchestrator |
2025-07-12 16:09:20.608227 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 16:09:20.608238 | orchestrator | Saturday 12 July 2025  16:09:11 +0000 (0:00:00.061)       0:00:05.445 *********
2025-07-12 16:09:20.608248 | orchestrator |
2025-07-12 16:09:20.608259 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 16:09:20.608270 | orchestrator | Saturday 12 July 2025  16:09:11 +0000 (0:00:00.065)       0:00:05.510 *********
2025-07-12 16:09:20.608280 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.608291 | orchestrator |
2025-07-12 16:09:20.608301 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-07-12 16:09:20.608312 | orchestrator | Saturday 12 July 2025  16:09:12 +0000 (0:00:00.224)       0:00:05.735 *********
2025-07-12 16:09:20.608323 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.608334 | orchestrator |
2025-07-12 16:09:20.608367 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-07-12 16:09:20.608382 | orchestrator | Saturday 12 July 2025  16:09:12 +0000 (0:00:00.189)       0:00:05.924 *********
2025-07-12 16:09:20.608401 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.608419 | orchestrator |
2025-07-12 16:09:20.608436 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-07-12 16:09:20.608453 | orchestrator | Saturday 12 July 2025  16:09:12 +0000 (0:00:00.103)       0:00:06.027 *********
2025-07-12 16:09:20.608471 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:09:20.608489 | orchestrator |
2025-07-12 16:09:20.608562 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-07-12 16:09:20.608575 | orchestrator | Saturday 12 July 2025  16:09:13 +0000 (0:00:01.610)       0:00:07.638 *********
2025-07-12 16:09:20.608586 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.608596 | orchestrator |
2025-07-12 16:09:20.608607 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-07-12 16:09:20.608618 | orchestrator | Saturday 12 July 2025  16:09:14 +0000 (0:00:00.302)       0:00:07.941 *********
2025-07-12 16:09:20.608629 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.608639 | orchestrator |
2025-07-12 16:09:20.608650 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-07-12 16:09:20.608661 | orchestrator | Saturday 12 July 2025  16:09:14 +0000 (0:00:00.326)       0:00:08.267 *********
2025-07-12 16:09:20.608672 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.608682 | orchestrator |
2025-07-12 16:09:20.608702 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-07-12 16:09:20.608765 | orchestrator | Saturday 12 July 2025  16:09:14 +0000 (0:00:00.323)       0:00:08.591 *********
2025-07-12 16:09:20.608778 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.608796 | orchestrator |
2025-07-12 16:09:20.608816 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-07-12 16:09:20.608834 | orchestrator | Saturday 12 July 2025  16:09:15 +0000 (0:00:00.290)       0:00:08.882 *********
2025-07-12 16:09:20.608851 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.608868 | orchestrator |
2025-07-12 16:09:20.608886 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-07-12 16:09:20.608904 | orchestrator | Saturday 12 July 2025  16:09:15 +0000 (0:00:00.106)       0:00:08.988 *********
2025-07-12 16:09:20.608921 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.608939 | orchestrator |
2025-07-12 16:09:20.608958 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-07-12 16:09:20.608977 | orchestrator | Saturday 12 July 2025  16:09:15 +0000 (0:00:00.108)       0:00:09.107 *********
2025-07-12 16:09:20.608996 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.609007 | orchestrator |
2025-07-12 16:09:20.609018 | orchestrator | TASK [Gather status data] ******************************************************
2025-07-12 16:09:20.609029 | orchestrator | Saturday 12 July 2025  16:09:15 +0000 (0:00:00.108)       0:00:09.216 *********
2025-07-12 16:09:20.609039 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:09:20.609050 | orchestrator |
2025-07-12 16:09:20.609081 | orchestrator | TASK [Set health test data] ****************************************************
2025-07-12 16:09:20.609092 | orchestrator | Saturday 12 July 2025  16:09:16 +0000 (0:00:01.374)       0:00:10.590 *********
2025-07-12 16:09:20.609103 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.609113 | orchestrator |
2025-07-12 16:09:20.609124 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-07-12 16:09:20.609135 | orchestrator | Saturday 12 July 2025  16:09:17 +0000 (0:00:00.308)       0:00:10.898 *********
2025-07-12 16:09:20.609146 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.609156 | orchestrator |
2025-07-12 16:09:20.609168 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-07-12 16:09:20.609178 | orchestrator | Saturday 12 July 2025  16:09:17 +0000 (0:00:00.133)       0:00:11.032 *********
2025-07-12 16:09:20.609189 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:20.609200 | orchestrator |
2025-07-12 16:09:20.609211 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-07-12 16:09:20.609222 | orchestrator | Saturday 12 July 2025  16:09:17 +0000 (0:00:00.130)       0:00:11.162 *********
2025-07-12 16:09:20.609233 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.609290 | orchestrator |
2025-07-12 16:09:20.609303 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-07-12 16:09:20.609314 | orchestrator | Saturday 12 July 2025  16:09:17 +0000 (0:00:00.140)       0:00:11.302 *********
2025-07-12 16:09:20.609325 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.609335 | orchestrator |
2025-07-12 16:09:20.609346 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 16:09:20.609357 | orchestrator | Saturday 12 July 2025  16:09:17 +0000 (0:00:00.295)       0:00:11.598 *********
2025-07-12 16:09:20.609368 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 16:09:20.609379 | orchestrator |
2025-07-12 16:09:20.609390 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-12 16:09:20.609401 | orchestrator | Saturday 12 July 2025  16:09:18 +0000 (0:00:00.252)       0:00:11.850 *********
2025-07-12 16:09:20.609411 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:20.609422 | orchestrator |
2025-07-12 16:09:20.609433 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 16:09:20.609444 | orchestrator | Saturday 12 July 2025  16:09:18 +0000 (0:00:00.229)       0:00:12.080 *********
2025-07-12 16:09:20.609454 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 16:09:20.609465 | orchestrator |
2025-07-12 16:09:20.609476 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 16:09:20.609510 | orchestrator | Saturday 12 July 2025  16:09:19 +0000 (0:00:01.500)       0:00:13.580 *********
2025-07-12 16:09:20.609530 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 16:09:20.609549 | orchestrator |
2025-07-12 16:09:20.609631 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 16:09:20.609645 | orchestrator | Saturday 12 July 2025  16:09:20 +0000 (0:00:00.267)       0:00:13.848 *********
2025-07-12 16:09:20.609656 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 16:09:20.609667 | orchestrator |
2025-07-12 16:09:20.609692 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 16:09:22.862611 | orchestrator | Saturday 12 July 2025  16:09:20 +0000 (0:00:00.068)       0:00:14.096 *********
2025-07-12 16:09:22.862725 | orchestrator |
2025-07-12 16:09:22.862744 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 16:09:22.862757 | orchestrator | Saturday 12 July 2025  16:09:20 +0000 (0:00:00.067)       0:00:14.165 *********
2025-07-12 16:09:22.862768 | orchestrator |
2025-07-12 16:09:22.862779 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 16:09:22.862790 | orchestrator | Saturday 12 July 2025  16:09:20 +0000 (0:00:00.070)       0:00:14.233 *********
2025-07-12 16:09:22.862805 | orchestrator |
2025-07-12 16:09:22.862816 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 16:09:22.862827 | orchestrator | Saturday 12 July 2025  16:09:20 +0000 (0:00:00.070)       0:00:14.303 *********
2025-07-12 16:09:22.862838 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 16:09:22.862849 | orchestrator |
2025-07-12 16:09:22.862860 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 16:09:22.862871 | orchestrator | Saturday 12 July 2025  16:09:22 +0000 (0:00:01.463)       0:00:15.767 *********
2025-07-12 16:09:22.862881 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-07-12 16:09:22.862893 | orchestrator |     "msg": [
2025-07-12 16:09:22.862905 | orchestrator |         "Validator run completed.",
2025-07-12 16:09:22.862936 | orchestrator |         "You can find the report file here:",
2025-07-12 16:09:22.862953 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2025-07-12T16:09:07+00:00-report.json",
2025-07-12 16:09:22.862965 | orchestrator |         "on the following host:",
2025-07-12 16:09:22.862976 | orchestrator |         "testbed-manager"
2025-07-12 16:09:22.862987 | orchestrator |     ]
2025-07-12 16:09:22.862998 | orchestrator | }
2025-07-12 16:09:22.863009 | orchestrator |
2025-07-12 16:09:22.863020 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 16:09:22.863032 | orchestrator | testbed-node-0             : ok=24   changed=5    unreachable=0    failed=0    skipped=13   rescued=0    ignored=0
2025-07-12 16:09:22.863045 | orchestrator | testbed-node-1             : ok=5    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-07-12 16:09:22.863119 | orchestrator | testbed-node-2             : ok=5    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-07-12 16:09:22.863133 | orchestrator |
2025-07-12 16:09:22.863144 | orchestrator |
2025-07-12 16:09:22.863157 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 16:09:22.863170 | orchestrator | Saturday 12 July 2025  16:09:22 +0000 (0:00:00.531)       0:00:16.298 *********
2025-07-12 16:09:22.863182 | orchestrator | ===============================================================================
2025-07-12 16:09:22.863194 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.61s
2025-07-12 16:09:22.863206 | orchestrator | Aggregate test results step one ----------------------------------------- 1.50s
2025-07-12 16:09:22.863218 | orchestrator | Write report file ------------------------------------------------------- 1.46s
2025-07-12 16:09:22.863230 | orchestrator | Gather status data ------------------------------------------------------ 1.37s
2025-07-12 16:09:22.863263 | orchestrator | Get container info ------------------------------------------------------ 0.94s
2025-07-12 16:09:22.863276 | orchestrator | Create report output directory ------------------------------------------ 0.72s
2025-07-12 16:09:22.863288 | orchestrator | Get timestamp for report file ------------------------------------------- 0.57s
2025-07-12 16:09:22.863300 | orchestrator | Print report file information ------------------------------------------- 0.53s
2025-07-12 16:09:22.863312 | orchestrator | Aggregate test results step one ----------------------------------------- 0.46s
2025-07-12 16:09:22.863324 | orchestrator | Set test result to passed if container is existing ---------------------- 0.38s
2025-07-12 16:09:22.863337 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.33s
2025-07-12 16:09:22.863349 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s
2025-07-12 16:09:22.863362 | orchestrator | Set health test data ---------------------------------------------------- 0.31s
2025-07-12 16:09:22.863374 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s
2025-07-12 16:09:22.863386 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.30s
2025-07-12 16:09:22.863398 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s
2025-07-12 16:09:22.863410 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.28s
2025-07-12 16:09:22.863423 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2025-07-12 16:09:22.863435 | orchestrator | Prepare test data ------------------------------------------------------- 0.26s
2025-07-12 16:09:22.863447 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.25s
2025-07-12 16:09:23.124806 | orchestrator | + osism validate ceph-mgrs
2025-07-12 16:09:53.071555 | orchestrator |
2025-07-12 16:09:53.071670 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-07-12 16:09:53.071698 | orchestrator |
2025-07-12 16:09:53.071719 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 16:09:53.071739 | orchestrator | Saturday 12 July 2025  16:09:39 +0000 (0:00:00.398)       0:00:00.398 *********
2025-07-12 16:09:53.071756 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 16:09:53.071767 | orchestrator |
2025-07-12 16:09:53.071778 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-12 16:09:53.071789 | orchestrator | Saturday 12 July 2025  16:09:39 +0000 (0:00:00.554)       0:00:00.952 *********
2025-07-12 16:09:53.071800 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 16:09:53.071811 | orchestrator |
2025-07-12 16:09:53.071822 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-12 16:09:53.071832 | orchestrator | Saturday 12 July 2025  16:09:40 +0000 (0:00:00.709)       0:00:01.661 *********
2025-07-12 16:09:53.071843 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:53.071855 | orchestrator |
2025-07-12 16:09:53.071867 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-07-12 16:09:53.071877 | orchestrator | Saturday 12 July 2025  16:09:40 +0000 (0:00:00.180)       0:00:01.841 *********
2025-07-12 16:09:53.071888 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:53.071899 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:09:53.071910 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:09:53.071920 | orchestrator |
2025-07-12 16:09:53.071931 | orchestrator | TASK [Get container info] ******************************************************
2025-07-12 16:09:53.071942 | orchestrator | Saturday 12 July 2025  16:09:40 +0000 (0:00:00.242)       0:00:02.084 *********
2025-07-12 16:09:53.071953 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:09:53.071963 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:09:53.071974 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:53.071985 | orchestrator |
2025-07-12 16:09:53.071996 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-07-12 16:09:53.072007 | orchestrator | Saturday 12 July 2025  16:09:41 +0000 (0:00:00.929)       0:00:03.013 *********
2025-07-12 16:09:53.072075 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:53.072106 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:09:53.072119 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:09:53.072133 | orchestrator |
2025-07-12 16:09:53.072145 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-07-12 16:09:53.072158 | orchestrator | Saturday 12 July 2025  16:09:42 +0000 (0:00:00.251)       0:00:03.265 *********
2025-07-12 16:09:53.072170 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:53.072182 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:09:53.072195 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:09:53.072207 | orchestrator |
2025-07-12 16:09:53.072220 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 16:09:53.072232 | orchestrator | Saturday 12 July 2025  16:09:42 +0000 (0:00:00.375)       0:00:03.641 *********
2025-07-12 16:09:53.072245 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:53.072257 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:09:53.072270 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:09:53.072282 | orchestrator |
2025-07-12 16:09:53.072294 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-07-12 16:09:53.072306 | orchestrator | Saturday 12 July 2025  16:09:42 +0000 (0:00:00.270)       0:00:03.911 *********
2025-07-12 16:09:53.072319 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:53.072331 | orchestrator | skipping: [testbed-node-1]
2025-07-12 16:09:53.072344 | orchestrator | skipping: [testbed-node-2]
2025-07-12 16:09:53.072356 | orchestrator |
2025-07-12 16:09:53.072369 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-07-12 16:09:53.072382 | orchestrator | Saturday 12 July 2025  16:09:42 +0000 (0:00:00.294)       0:00:04.206 *********
2025-07-12 16:09:53.072394 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:53.072406 | orchestrator | ok: [testbed-node-1]
2025-07-12 16:09:53.072418 | orchestrator | ok: [testbed-node-2]
2025-07-12 16:09:53.072429 | orchestrator |
2025-07-12 16:09:53.072440 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 16:09:53.072451 | orchestrator | Saturday 12 July 2025  16:09:43 +0000 (0:00:00.288)       0:00:04.494 *********
2025-07-12 16:09:53.072462 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:53.072472 | orchestrator |
2025-07-12 16:09:53.072483 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 16:09:53.072494 | orchestrator | Saturday 12 July 2025  16:09:43 +0000 (0:00:00.612)       0:00:05.107 *********
2025-07-12 16:09:53.072504 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:53.072515 | orchestrator |
2025-07-12 16:09:53.072526 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 16:09:53.072536 | orchestrator | Saturday 12 July 2025  16:09:44 +0000 (0:00:00.235)       0:00:05.343 *********
2025-07-12 16:09:53.072547 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:53.072558 | orchestrator |
2025-07-12 16:09:53.072569 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 16:09:53.072579 | orchestrator | Saturday 12 July 2025  16:09:44 +0000 (0:00:00.258)       0:00:05.601 *********
2025-07-12 16:09:53.072590 | orchestrator |
2025-07-12 16:09:53.072601 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 16:09:53.072611 | orchestrator | Saturday 12 July 2025  16:09:44 +0000 (0:00:00.068)       0:00:05.670 *********
2025-07-12 16:09:53.072622 | orchestrator |
2025-07-12 16:09:53.072633 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 16:09:53.072644 | orchestrator | Saturday 12 July 2025  16:09:44 +0000 (0:00:00.068)       0:00:05.739 *********
2025-07-12 16:09:53.072654 | orchestrator |
2025-07-12 16:09:53.072667 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 16:09:53.072685 | orchestrator | Saturday 12 July 2025  16:09:44 +0000 (0:00:00.089)       0:00:05.829 *********
2025-07-12 16:09:53.072705 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:53.072724 | orchestrator |
2025-07-12 16:09:53.072745 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-07-12 16:09:53.072777 | orchestrator | Saturday 12 July 2025  16:09:44 +0000 (0:00:00.250)       0:00:06.079 *********
2025-07-12 16:09:53.072791 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:53.072802 | orchestrator |
2025-07-12 16:09:53.072833 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-07-12 16:09:53.072844 | orchestrator | Saturday 12 July 2025  16:09:45 +0000 (0:00:00.249)       0:00:06.328 *********
2025-07-12 16:09:53.072855 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:53.072866 | orchestrator |
2025-07-12 16:09:53.072877 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-07-12 16:09:53.072888 | orchestrator | Saturday 12 July 2025  16:09:45 +0000 (0:00:00.120)       0:00:06.449 *********
2025-07-12 16:09:53.072898 | orchestrator | changed: [testbed-node-0]
2025-07-12 16:09:53.072909 | orchestrator |
2025-07-12 16:09:53.072919 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-07-12 16:09:53.072930 | orchestrator | Saturday 12 July 2025  16:09:47 +0000 (0:00:02.109)       0:00:08.559 *********
2025-07-12 16:09:53.072941 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:53.072951 | orchestrator |
2025-07-12 16:09:53.072962 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-07-12 16:09:53.072973 | orchestrator | Saturday 12 July 2025  16:09:47 +0000 (0:00:00.224)       0:00:08.783 *********
2025-07-12 16:09:53.072983 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:53.072994 | orchestrator |
2025-07-12 16:09:53.073005 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-07-12 16:09:53.073016 | orchestrator | Saturday 12 July 2025  16:09:48 +0000 (0:00:00.664)       0:00:09.447 *********
2025-07-12 16:09:53.073026 | orchestrator | skipping: [testbed-node-0]
2025-07-12 16:09:53.073062 | orchestrator |
2025-07-12 16:09:53.073074 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-07-12 16:09:53.073084 | orchestrator | Saturday 12 July 2025  16:09:48 +0000 (0:00:00.135)       0:00:09.582 *********
2025-07-12 16:09:53.073095 | orchestrator | ok: [testbed-node-0]
2025-07-12 16:09:53.073106 | orchestrator |
2025-07-12 16:09:53.073117 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 16:09:53.073127 | orchestrator | Saturday 12 July 2025  16:09:48 +0000 (0:00:00.149)       0:00:09.732 *********
2025-07-12 16:09:53.073138 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 16:09:53.073149 |
orchestrator | 2025-07-12 16:09:53.073159 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-12 16:09:53.073170 | orchestrator | Saturday 12 July 2025 16:09:48 +0000 (0:00:00.242) 0:00:09.974 ********* 2025-07-12 16:09:53.073181 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:09:53.073192 | orchestrator | 2025-07-12 16:09:53.073203 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-12 16:09:53.073214 | orchestrator | Saturday 12 July 2025 16:09:48 +0000 (0:00:00.246) 0:00:10.220 ********* 2025-07-12 16:09:53.073224 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 16:09:53.073248 | orchestrator | 2025-07-12 16:09:53.073260 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-12 16:09:53.073271 | orchestrator | Saturday 12 July 2025 16:09:50 +0000 (0:00:01.204) 0:00:11.425 ********* 2025-07-12 16:09:53.073281 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 16:09:53.073292 | orchestrator | 2025-07-12 16:09:53.073303 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-12 16:09:53.073314 | orchestrator | Saturday 12 July 2025 16:09:50 +0000 (0:00:00.267) 0:00:11.692 ********* 2025-07-12 16:09:53.073334 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 16:09:53.073346 | orchestrator | 2025-07-12 16:09:53.073357 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 16:09:53.073367 | orchestrator | Saturday 12 July 2025 16:09:50 +0000 (0:00:00.239) 0:00:11.932 ********* 2025-07-12 16:09:53.073378 | orchestrator | 2025-07-12 16:09:53.073389 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 16:09:53.073407 | orchestrator | Saturday 12 
July 2025 16:09:50 +0000 (0:00:00.065) 0:00:11.998 ********* 2025-07-12 16:09:53.073418 | orchestrator | 2025-07-12 16:09:53.073429 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 16:09:53.073439 | orchestrator | Saturday 12 July 2025 16:09:50 +0000 (0:00:00.066) 0:00:12.065 ********* 2025-07-12 16:09:53.073450 | orchestrator | 2025-07-12 16:09:53.073461 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-12 16:09:53.073472 | orchestrator | Saturday 12 July 2025 16:09:50 +0000 (0:00:00.068) 0:00:12.133 ********* 2025-07-12 16:09:53.073482 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 16:09:53.073493 | orchestrator | 2025-07-12 16:09:53.073504 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-12 16:09:53.073514 | orchestrator | Saturday 12 July 2025 16:09:52 +0000 (0:00:01.758) 0:00:13.891 ********* 2025-07-12 16:09:53.073525 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-07-12 16:09:53.073536 | orchestrator |  "msg": [ 2025-07-12 16:09:53.073548 | orchestrator |  "Validator run completed.", 2025-07-12 16:09:53.073559 | orchestrator |  "You can find the report file here:", 2025-07-12 16:09:53.073570 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-07-12T16:09:39+00:00-report.json", 2025-07-12 16:09:53.073582 | orchestrator |  "on the following host:", 2025-07-12 16:09:53.073593 | orchestrator |  "testbed-manager" 2025-07-12 16:09:53.073603 | orchestrator |  ] 2025-07-12 16:09:53.073615 | orchestrator | } 2025-07-12 16:09:53.073626 | orchestrator | 2025-07-12 16:09:53.073637 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 16:09:53.073648 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 
2025-07-12 16:09:53.073660 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 16:09:53.073679 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 16:09:53.343113 | orchestrator |
2025-07-12 16:09:53.343220 | orchestrator |
2025-07-12 16:09:53.343235 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 16:09:53.343249 | orchestrator | Saturday 12 July 2025  16:09:53 +0000 (0:00:00.393)       0:00:14.284 *********
2025-07-12 16:09:53.343260 | orchestrator | ===============================================================================
2025-07-12 16:09:53.343271 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.11s
2025-07-12 16:09:53.343282 | orchestrator | Write report file ------------------------------------------------------- 1.76s
2025-07-12 16:09:53.343292 | orchestrator | Aggregate test results step one ----------------------------------------- 1.20s
2025-07-12 16:09:53.343327 | orchestrator | Get container info ------------------------------------------------------ 0.93s
2025-07-12 16:09:53.343339 | orchestrator | Create report output directory ------------------------------------------ 0.71s
2025-07-12 16:09:53.343350 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.66s
2025-07-12 16:09:53.343361 | orchestrator | Aggregate test results step one ----------------------------------------- 0.61s
2025-07-12 16:09:53.343372 | orchestrator | Get timestamp for report file ------------------------------------------- 0.55s
2025-07-12 16:09:53.343383 | orchestrator | Print report file information ------------------------------------------- 0.39s
2025-07-12 16:09:53.343393 | orchestrator | Set test result to passed if container is existing ---------------------- 0.38s
2025-07-12 16:09:53.343404 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s
2025-07-12 16:09:53.343415 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.29s
2025-07-12 16:09:53.343448 | orchestrator | Prepare test data ------------------------------------------------------- 0.27s
2025-07-12 16:09:53.343459 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2025-07-12 16:09:53.343470 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2025-07-12 16:09:53.343486 | orchestrator | Set test result to failed if container is missing ----------------------- 0.25s
2025-07-12 16:09:53.343497 | orchestrator | Print report file information ------------------------------------------- 0.25s
2025-07-12 16:09:53.343508 | orchestrator | Fail due to missing containers ------------------------------------------ 0.25s
2025-07-12 16:09:53.343518 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.25s
2025-07-12 16:09:53.343529 | orchestrator | Prepare test data for container existance test -------------------------- 0.24s
2025-07-12 16:09:53.592467 | orchestrator | + osism validate ceph-osds
2025-07-12 16:10:13.729142 | orchestrator |
2025-07-12 16:10:13.729256 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-07-12 16:10:13.729273 | orchestrator |
2025-07-12 16:10:13.729285 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 16:10:13.729297 | orchestrator | Saturday 12 July 2025  16:10:09 +0000 (0:00:00.438)       0:00:00.438 *********
2025-07-12 16:10:13.729309 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 16:10:13.729320 | orchestrator |
2025-07-12 16:10:13.729331 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 16:10:13.729342 | orchestrator | Saturday 12 July 2025  16:10:10 +0000 (0:00:00.625)       0:00:01.063 *********
2025-07-12 16:10:13.729353 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 16:10:13.729364 | orchestrator |
2025-07-12 16:10:13.729374 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-12 16:10:13.729385 | orchestrator | Saturday 12 July 2025  16:10:10 +0000 (0:00:00.216)       0:00:01.280 *********
2025-07-12 16:10:13.729395 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 16:10:13.729406 | orchestrator |
2025-07-12 16:10:13.729420 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-12 16:10:13.729439 | orchestrator | Saturday 12 July 2025  16:10:11 +0000 (0:00:00.946)       0:00:02.227 *********
2025-07-12 16:10:13.729452 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:10:13.729464 | orchestrator |
2025-07-12 16:10:13.729475 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-07-12 16:10:13.729486 | orchestrator | Saturday 12 July 2025  16:10:11 +0000 (0:00:00.132)       0:00:02.359 *********
2025-07-12 16:10:13.729496 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:10:13.729507 | orchestrator |
2025-07-12 16:10:13.729518 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-07-12 16:10:13.729529 | orchestrator | Saturday 12 July 2025  16:10:11 +0000 (0:00:00.116)       0:00:02.476 *********
2025-07-12 16:10:13.729539 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:10:13.729550 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:10:13.729561 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:10:13.729571 | orchestrator |
2025-07-12 16:10:13.729582 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-07-12 16:10:13.729593 | orchestrator | Saturday 12 July 2025  16:10:12 +0000 (0:00:00.288)       0:00:02.764 *********
2025-07-12 16:10:13.729603 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:10:13.729614 | orchestrator |
2025-07-12 16:10:13.729624 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-07-12 16:10:13.729635 | orchestrator | Saturday 12 July 2025  16:10:12 +0000 (0:00:00.148)       0:00:02.913 *********
2025-07-12 16:10:13.729647 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:10:13.729659 | orchestrator | ok: [testbed-node-4]
2025-07-12 16:10:13.729671 | orchestrator | ok: [testbed-node-5]
2025-07-12 16:10:13.729683 | orchestrator |
2025-07-12 16:10:13.729696 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-07-12 16:10:13.729708 | orchestrator | Saturday 12 July 2025  16:10:12 +0000 (0:00:00.331)       0:00:03.245 *********
2025-07-12 16:10:13.729745 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:10:13.729758 | orchestrator |
2025-07-12 16:10:13.729770 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 16:10:13.729861 | orchestrator | Saturday 12 July 2025  16:10:13 +0000 (0:00:00.505)       0:00:03.751 *********
2025-07-12 16:10:13.729873 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:10:13.729886 | orchestrator | ok: [testbed-node-4]
2025-07-12 16:10:13.729898 | orchestrator | ok: [testbed-node-5]
2025-07-12 16:10:13.729910 | orchestrator |
2025-07-12 16:10:13.729922 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-07-12 16:10:13.729934 | orchestrator | Saturday 12 July 2025  16:10:13 +0000 (0:00:00.440)       0:00:04.191 *********
2025-07-12 16:10:13.729949 | orchestrator | skipping: [testbed-node-3] => (item={'id': '16ae0dc768df3ab9c0443cd1dd598408ae152a0499e48338931de34d682a7185', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name':
'/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-07-12 16:10:13.729964 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e73057d7334da26693f30022c3c23699d458829caeb4133e39e44995ce3eec84', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 16:10:13.729977 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd15f7c407587afe918f937b36efffc748f0113f11e9b8ad5aaf964312e5a4513', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 16:10:13.729991 | orchestrator | skipping: [testbed-node-3] => (item={'id': '356c9a14c04b8878bc9995032ad255a6df6dc150f69f61e542de7465abbe21ea', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 16:10:13.730108 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2589c52fb7b4863fd4d1b7db98515540c5a9178d4f4b037624f3771803112026', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 16:10:13.730144 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd9c19848697003cf39e80d587ae3b4a3728cec2ca78c0781487c01f9ba341118', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 16:10:13.730156 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7063c68b7a048ceebbdaf97a30fa6341400dc898daa617c1ea77d0dafd4266b1', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 
minutes'})  2025-07-12 16:10:13.730178 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2e8c9e75b47738a6c495e05b638a083e288db2846ee4f3d54b2e3ed9835dcbfd', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 16:10:13.730189 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9232015c075a55cf30a0a98fd75f27ea1a5bdf981d838ba5ded8d3c8310c60d4', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-12 16:10:13.730200 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6bc89cae56cc8d6a8f37ff05592ab71ef4009867a618c92cf96d61a615d2195d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-12 16:10:13.730213 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd0b0ce9221df91fb05755c5b5075fea7465b429aa79f70a684723e97d149363f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 16:10:13.730236 | orchestrator | skipping: [testbed-node-3] => (item={'id': '63fcac130aef3cef1b2e0d2b2bb02166edd625f10812e45a403db6cb2184c603', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 16:10:13.730250 | orchestrator | ok: [testbed-node-3] => (item={'id': '212f4aa3b95b65f947a76470b4fa99fcdfbd99f61e291594f756ceecda7a5856', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 16:10:13.730262 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c6933022909397fbbe11fbe6fb96a532d38da8ef1a1f37bc90cf5ce6fc64e8b0', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 16:10:13.730273 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dc40d7bcf39346f042e9f89a0ede5f117aace672cbbe0b4068e502b17fb6075f', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-07-12 16:10:13.730284 | orchestrator | skipping: [testbed-node-3] => (item={'id': '54a5f0cc66a56919f801997c708e62efa68fac69c42d1f0e6483110bf17bbfd1', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-12 16:10:13.730296 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8724adbf3af60b9dd0fa6cc25d8185b7dacdc6c3926a5d16b2275b74feb2f1ab', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-12 16:10:13.730307 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9de66a89164cf857579cac65289259fa97331b2ae98a2957a29f80d39432ee3f', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-12 16:10:13.730318 | orchestrator | skipping: [testbed-node-3] => (item={'id': '49b17ab8671183335681dfc2beaa71fd10a64dd2b5c29fbcd53d4270794b401b', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-12 16:10:13.730330 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2357bd4fabc7d6831fac6db9d0a12c4725a1b0ab4cd34e482efecee09fb42476', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-12 16:10:13.730390 | 
orchestrator | skipping: [testbed-node-4] => (item={'id': '1f400a904c050dce3112d2ef0ba9a4842e413353ef2a3b2de739632983ba408f', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-07-12 16:10:13.842894 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ecd441276e19cc9c803120b8b9f22cfc401e293f0f033da2794535894cc3ae9', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 16:10:13.842994 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b630d926f7e5e3d86b7fb5bc42923dda43136612526337db0eec3ea951871ef3', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 16:10:13.843009 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dd6f94464e7e67d063b7c07df52721ef1975647115b54fdee6589990abfc15cb', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 16:10:13.843100 | orchestrator | skipping: [testbed-node-4] => (item={'id': '575401956f4c1f16b936be31956762159308fbedd457517c29145a8762b2e1ef', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 16:10:13.843113 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4f8b49cc80e29bad5a7d24a21a65e1e000348e2cbd0a6516bd40ebe39e6fcc40', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 16:10:13.843124 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'63ebdb95662e25f3b40d1170eeb047e5971bb1c5ac89f1ff9fa032086618bcf2', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 16:10:13.843137 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a915486e9245ea2b5a8a791343bda4c1d1324b7960c20fa112aabc6308de0962', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 16:10:13.843148 | orchestrator | skipping: [testbed-node-4] => (item={'id': '25d2d6291acff203d9a286008444827da3eab017d9849acd38356e94e63ece60', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-12 16:10:13.843159 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e1e3239a0b9b2295da1dcc2a240bed7474b2257005d74988b909ced9153a3241', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-12 16:10:13.843171 | orchestrator | skipping: [testbed-node-4] => (item={'id': '35c18340697ce92f7656c9943ac94ca7e439e9394119e0524dff18a43e5810f3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 16:10:13.843183 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a9ad80b346b1b0fa9c0105c282582ea0f36e5a1cfc19b52f71402de66bd8d8ad', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 16:10:13.843212 | orchestrator | ok: [testbed-node-4] => (item={'id': '702e3ec136e7c55048e755c4107b8971bb643768366ac7363b182baaf4f4a9ec', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 16:10:13.843229 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c3a27a9d3faf7786e5ec3022ddc880f5755507bb9fcd931278bec53af116aced', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 16:10:13.843241 | orchestrator | skipping: [testbed-node-4] => (item={'id': '770697bfe49fe16122a796091e8b476f458b6cc4569587ff41783bf7c5da1602', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-07-12 16:10:13.843270 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fb44594cd954887a0360393b4b6cad32b34fd5c416060cb4a52e520007c3d124', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-12 16:10:13.843282 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4056dd796c29e44bf358be7051377b8eea8ccc12c907bfe7076e68bd9f7ebe4c', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-12 16:10:13.843293 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b3436ec0163d9c2c67df71aace35d065ebd60b2b629f2ddc7adcf088d1becd58', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-12 16:10:13.843312 | orchestrator | skipping: [testbed-node-4] => (item={'id': '62b71211c7dad6b84c6d933a5fe410f48e8825c53bb5214d020da2cb5f84fc89', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-12 16:10:13.843324 | orchestrator | 
skipping: [testbed-node-4] => (item={'id': 'da7cb513eb9a671dbc80ea475e47309fb2469360e94a7445c77188003eadbca2', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-12 16:10:13.843335 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6d9cd393fd2bd296f98b8f369b3f601e0d9c9634b5f17502d7105c6f928be2eb', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-07-12 16:10:13.843346 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0196f2732022daedb1ae7c2ca8717a5842d4799d82511a2fe4305d79b85b0ea7', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 16:10:13.843357 | orchestrator | skipping: [testbed-node-5] => (item={'id': '305899dd6fcc03550d01b04da330f58b22a979c9c1940f376ceff1ac73cfe567', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 16:10:13.843369 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eeecc1a127b117f0b29da9dd0d0c2eeb36d4c74c899e4ccd52f228e130c283de', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 16:10:13.843380 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dc00a9a775a8d1ee4b31627dcc0eedaa47a57586b1fc461cdb1928a8ed686cec', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 16:10:13.843391 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2e568e8203fd4fca5265e7cc89e0d1783ab5be11ce5c411373723080cea71026', 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 16:10:13.843402 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0f52d5152e2a7416284da4fe74ff49708513b7a9215706267bf28a1262fc18e4', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 16:10:13.843413 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fb6552de8d8ecb2f54a320e42f7c8cdeffeeceeb6456c0e92a1f649df3d40e5d', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 16:10:13.843430 | orchestrator | skipping: [testbed-node-5] => (item={'id': '11518e812fcf3529ef7f61cedbcfef714488490d89352953eb15ab2ec179b3b4', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-12 16:10:13.843444 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f78b2a3ff8257e584e18cd0d1beb0d276bbb3ca2c32bf1d4ecd244f4f4659065', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-12 16:10:13.843464 | orchestrator | skipping: [testbed-node-5] => (item={'id': '05b825e6a90086edda5f3a67968092f5e393b353393a05facfb8c235031c03d7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 16:10:21.103605 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd284d63ae67b61663b79132a280ea88056ac3613906721d6be587e2b340fec1f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 
'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 16:10:21.103720 | orchestrator | ok: [testbed-node-5] => (item={'id': 'a2cfe60b431f0b19c555e679e7f45d5d62ee5008103745a8d33a8cda795dbcdf', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 16:10:21.103738 | orchestrator | ok: [testbed-node-5] => (item={'id': '141a670a65d4c1afe3f248d690ea28a64d5217447b6dc2b168436b93b82bc462', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 16:10:21.103750 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c52171c3d259fc541b3dc47a079e2bbbe984799bc86dba1c81128ebfd5ab9a4b', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-07-12 16:10:21.103762 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ec70b32110476f00bf81afc11d278911c860d0dd792d92fabbd5c95db75c23ef', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-12 16:10:21.103776 | orchestrator | skipping: [testbed-node-5] => (item={'id': '14913a2fd09d777080994c6ca16c1c68148da3752f7f8d07fc9491370e3253ab', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-12 16:10:21.103787 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5a2a9f2ffd3820296a3d8dff4af356213ac147100388aa0d7c16cd28b820dc36', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-12 16:10:21.103798 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'76a7b911f9b069dbe17eae5c3df7f7d8ddc665c94fada73caa45b369c1682056', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})
2025-07-12 16:10:21.103810 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f9883de193baa8d2752c4d2750ab6facf89d08cf6d986298fa3ec7299bc24f46', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-07-12 16:10:21.103820 | orchestrator |
2025-07-12 16:10:21.103833 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-07-12 16:10:21.103845 | orchestrator | Saturday 12 July 2025  16:10:13 +0000 (0:00:00.469)       0:00:04.661 *********
2025-07-12 16:10:21.103856 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:10:21.103868 | orchestrator | ok: [testbed-node-4]
2025-07-12 16:10:21.103879 | orchestrator | ok: [testbed-node-5]
2025-07-12 16:10:21.103890 | orchestrator |
2025-07-12 16:10:21.103901 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-07-12 16:10:21.103912 | orchestrator | Saturday 12 July 2025  16:10:14 +0000 (0:00:00.287)       0:00:04.948 *********
2025-07-12 16:10:21.103922 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:10:21.103934 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:10:21.103945 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:10:21.103955 | orchestrator |
2025-07-12 16:10:21.103966 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-07-12 16:10:21.103977 | orchestrator | Saturday 12 July 2025  16:10:14 +0000 (0:00:00.309)       0:00:05.258 *********
2025-07-12 16:10:21.103988 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:10:21.103999 | orchestrator | ok: [testbed-node-4]
2025-07-12 16:10:21.104009 | orchestrator | ok: [testbed-node-5]
2025-07-12 16:10:21.104149 | orchestrator |
2025-07-12 16:10:21.104163 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 16:10:21.104175 | orchestrator | Saturday 12 July 2025  16:10:15 +0000 (0:00:00.493)       0:00:05.751 *********
2025-07-12 16:10:21.104201 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:10:21.104213 | orchestrator | ok: [testbed-node-4]
2025-07-12 16:10:21.104225 | orchestrator | ok: [testbed-node-5]
2025-07-12 16:10:21.104237 | orchestrator |
2025-07-12 16:10:21.104248 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-07-12 16:10:21.104260 | orchestrator | Saturday 12 July 2025  16:10:15 +0000 (0:00:00.280)       0:00:06.031 *********
2025-07-12 16:10:21.104272 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-07-12 16:10:21.104286 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-07-12 16:10:21.104299 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:10:21.104311 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-07-12 16:10:21.104324 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-07-12 16:10:21.104354 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:10:21.104367 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-07-12 16:10:21.104379 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-07-12 16:10:21.104391 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:10:21.104403 | orchestrator |
2025-07-12 16:10:21.104416 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-07-12 16:10:21.104428 | orchestrator | Saturday 12 July 2025  16:10:15 +0000 (0:00:00.306)       0:00:06.338 *********
2025-07-12 16:10:21.104440 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:10:21.104451 | orchestrator | ok: [testbed-node-4]
2025-07-12 16:10:21.104464 | orchestrator | ok: [testbed-node-5]
2025-07-12 16:10:21.104477 | orchestrator |
2025-07-12 16:10:21.104489 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-07-12 16:10:21.104500 | orchestrator | Saturday 12 July 2025  16:10:15 +0000 (0:00:00.291)       0:00:06.629 *********
2025-07-12 16:10:21.104510 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:10:21.104521 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:10:21.104532 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:10:21.104542 | orchestrator |
2025-07-12 16:10:21.104553 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-07-12 16:10:21.104564 | orchestrator | Saturday 12 July 2025  16:10:16 +0000 (0:00:00.457)       0:00:07.086 *********
2025-07-12 16:10:21.104574 | orchestrator | skipping: [testbed-node-3]
2025-07-12 16:10:21.104585 | orchestrator | skipping: [testbed-node-4]
2025-07-12 16:10:21.104595 | orchestrator | skipping: [testbed-node-5]
2025-07-12 16:10:21.104606 | orchestrator |
2025-07-12 16:10:21.104617 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-07-12 16:10:21.104627 | orchestrator | Saturday 12 July 2025  16:10:16 +0000 (0:00:00.289)       0:00:07.376 *********
2025-07-12 16:10:21.104638 | orchestrator | ok: [testbed-node-3]
2025-07-12 16:10:21.104648 | orchestrator | ok: [testbed-node-4]
2025-07-12 16:10:21.104659 | orchestrator | ok: [testbed-node-5]
2025-07-12 16:10:21.104670 | orchestrator |
2025-07-12 16:10:21.104680 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 16:10:21.104691 | orchestrator | Saturday 12 July 2025
16:10:16 +0000 (0:00:00.275) 0:00:07.651 ********* 2025-07-12 16:10:21.104702 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:21.104712 | orchestrator | 2025-07-12 16:10:21.104723 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-12 16:10:21.104733 | orchestrator | Saturday 12 July 2025 16:10:17 +0000 (0:00:00.227) 0:00:07.878 ********* 2025-07-12 16:10:21.104744 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:21.104765 | orchestrator | 2025-07-12 16:10:21.104776 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-12 16:10:21.104787 | orchestrator | Saturday 12 July 2025 16:10:17 +0000 (0:00:00.262) 0:00:08.140 ********* 2025-07-12 16:10:21.104797 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:21.104809 | orchestrator | 2025-07-12 16:10:21.104819 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 16:10:21.104830 | orchestrator | Saturday 12 July 2025 16:10:17 +0000 (0:00:00.220) 0:00:08.361 ********* 2025-07-12 16:10:21.104841 | orchestrator | 2025-07-12 16:10:21.104852 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 16:10:21.104863 | orchestrator | Saturday 12 July 2025 16:10:17 +0000 (0:00:00.064) 0:00:08.426 ********* 2025-07-12 16:10:21.104952 | orchestrator | 2025-07-12 16:10:21.104975 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 16:10:21.104993 | orchestrator | Saturday 12 July 2025 16:10:17 +0000 (0:00:00.061) 0:00:08.488 ********* 2025-07-12 16:10:21.105041 | orchestrator | 2025-07-12 16:10:21.105059 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-12 16:10:21.105072 | orchestrator | Saturday 12 July 2025 16:10:17 +0000 (0:00:00.216) 0:00:08.704 ********* 2025-07-12 
16:10:21.105091 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:21.105108 | orchestrator | 2025-07-12 16:10:21.105127 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-07-12 16:10:21.105144 | orchestrator | Saturday 12 July 2025 16:10:18 +0000 (0:00:00.244) 0:00:08.949 ********* 2025-07-12 16:10:21.105162 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:21.105181 | orchestrator | 2025-07-12 16:10:21.105199 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-12 16:10:21.105218 | orchestrator | Saturday 12 July 2025 16:10:18 +0000 (0:00:00.233) 0:00:09.182 ********* 2025-07-12 16:10:21.105236 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:21.105255 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:10:21.105272 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:10:21.105291 | orchestrator | 2025-07-12 16:10:21.105308 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-07-12 16:10:21.105326 | orchestrator | Saturday 12 July 2025 16:10:18 +0000 (0:00:00.281) 0:00:09.464 ********* 2025-07-12 16:10:21.105344 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:21.105362 | orchestrator | 2025-07-12 16:10:21.105382 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-07-12 16:10:21.105401 | orchestrator | Saturday 12 July 2025 16:10:18 +0000 (0:00:00.217) 0:00:09.682 ********* 2025-07-12 16:10:21.105420 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 16:10:21.105440 | orchestrator | 2025-07-12 16:10:21.105459 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-07-12 16:10:21.105477 | orchestrator | Saturday 12 July 2025 16:10:20 +0000 (0:00:01.602) 0:00:11.285 ********* 2025-07-12 16:10:21.105496 | orchestrator | ok: [testbed-node-3] 2025-07-12 
16:10:21.105510 | orchestrator | 2025-07-12 16:10:21.105521 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-07-12 16:10:21.105531 | orchestrator | Saturday 12 July 2025 16:10:20 +0000 (0:00:00.125) 0:00:11.410 ********* 2025-07-12 16:10:21.105542 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:21.105552 | orchestrator | 2025-07-12 16:10:21.105563 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-07-12 16:10:21.105574 | orchestrator | Saturday 12 July 2025 16:10:20 +0000 (0:00:00.299) 0:00:11.709 ********* 2025-07-12 16:10:21.105596 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:33.410618 | orchestrator | 2025-07-12 16:10:33.410734 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-07-12 16:10:33.410751 | orchestrator | Saturday 12 July 2025 16:10:21 +0000 (0:00:00.107) 0:00:11.816 ********* 2025-07-12 16:10:33.410763 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:33.410776 | orchestrator | 2025-07-12 16:10:33.410788 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-12 16:10:33.410820 | orchestrator | Saturday 12 July 2025 16:10:21 +0000 (0:00:00.122) 0:00:11.938 ********* 2025-07-12 16:10:33.410831 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:33.410842 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:10:33.410853 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:10:33.410864 | orchestrator | 2025-07-12 16:10:33.410875 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-07-12 16:10:33.410886 | orchestrator | Saturday 12 July 2025 16:10:21 +0000 (0:00:00.466) 0:00:12.405 ********* 2025-07-12 16:10:33.410897 | orchestrator | changed: [testbed-node-3] 2025-07-12 16:10:33.410909 | orchestrator | changed: [testbed-node-4] 2025-07-12 16:10:33.410919 | orchestrator | 
changed: [testbed-node-5] 2025-07-12 16:10:33.410930 | orchestrator | 2025-07-12 16:10:33.410941 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-07-12 16:10:33.410952 | orchestrator | Saturday 12 July 2025 16:10:24 +0000 (0:00:02.401) 0:00:14.806 ********* 2025-07-12 16:10:33.410963 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:33.410974 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:10:33.410985 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:10:33.410995 | orchestrator | 2025-07-12 16:10:33.411048 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-07-12 16:10:33.411060 | orchestrator | Saturday 12 July 2025 16:10:24 +0000 (0:00:00.302) 0:00:15.109 ********* 2025-07-12 16:10:33.411071 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:33.411082 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:10:33.411139 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:10:33.411152 | orchestrator | 2025-07-12 16:10:33.411165 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-07-12 16:10:33.411177 | orchestrator | Saturday 12 July 2025 16:10:24 +0000 (0:00:00.487) 0:00:15.597 ********* 2025-07-12 16:10:33.411190 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:33.411202 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:10:33.411214 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:10:33.411227 | orchestrator | 2025-07-12 16:10:33.411239 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-07-12 16:10:33.411251 | orchestrator | Saturday 12 July 2025 16:10:25 +0000 (0:00:00.486) 0:00:16.083 ********* 2025-07-12 16:10:33.411263 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:33.411275 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:10:33.411287 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:10:33.411298 | 
orchestrator | 2025-07-12 16:10:33.411311 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-07-12 16:10:33.411323 | orchestrator | Saturday 12 July 2025 16:10:25 +0000 (0:00:00.309) 0:00:16.393 ********* 2025-07-12 16:10:33.411336 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:33.411349 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:10:33.411361 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:10:33.411373 | orchestrator | 2025-07-12 16:10:33.411385 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-07-12 16:10:33.411397 | orchestrator | Saturday 12 July 2025 16:10:25 +0000 (0:00:00.276) 0:00:16.670 ********* 2025-07-12 16:10:33.411409 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:33.411421 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:10:33.411433 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:10:33.411446 | orchestrator | 2025-07-12 16:10:33.411458 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-12 16:10:33.411470 | orchestrator | Saturday 12 July 2025 16:10:26 +0000 (0:00:00.263) 0:00:16.933 ********* 2025-07-12 16:10:33.411482 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:33.411494 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:10:33.411505 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:10:33.411516 | orchestrator | 2025-07-12 16:10:33.411527 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-07-12 16:10:33.411538 | orchestrator | Saturday 12 July 2025 16:10:26 +0000 (0:00:00.710) 0:00:17.643 ********* 2025-07-12 16:10:33.411556 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:33.411567 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:10:33.411577 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:10:33.411588 | orchestrator | 2025-07-12 
16:10:33.411599 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-07-12 16:10:33.411609 | orchestrator | Saturday 12 July 2025 16:10:27 +0000 (0:00:00.462) 0:00:18.106 ********* 2025-07-12 16:10:33.411620 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:33.411631 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:10:33.411641 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:10:33.411652 | orchestrator | 2025-07-12 16:10:33.411663 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-07-12 16:10:33.411674 | orchestrator | Saturday 12 July 2025 16:10:27 +0000 (0:00:00.267) 0:00:18.374 ********* 2025-07-12 16:10:33.411684 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:33.411700 | orchestrator | skipping: [testbed-node-4] 2025-07-12 16:10:33.411711 | orchestrator | skipping: [testbed-node-5] 2025-07-12 16:10:33.411722 | orchestrator | 2025-07-12 16:10:33.411733 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-07-12 16:10:33.411743 | orchestrator | Saturday 12 July 2025 16:10:27 +0000 (0:00:00.279) 0:00:18.653 ********* 2025-07-12 16:10:33.411754 | orchestrator | ok: [testbed-node-3] 2025-07-12 16:10:33.411765 | orchestrator | ok: [testbed-node-4] 2025-07-12 16:10:33.411776 | orchestrator | ok: [testbed-node-5] 2025-07-12 16:10:33.411786 | orchestrator | 2025-07-12 16:10:33.411797 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-12 16:10:33.411807 | orchestrator | Saturday 12 July 2025 16:10:28 +0000 (0:00:00.480) 0:00:19.134 ********* 2025-07-12 16:10:33.411818 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 16:10:33.411829 | orchestrator | 2025-07-12 16:10:33.411840 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-12 16:10:33.411851 | 
orchestrator | Saturday 12 July 2025 16:10:28 +0000 (0:00:00.249) 0:00:19.383 ********* 2025-07-12 16:10:33.411862 | orchestrator | skipping: [testbed-node-3] 2025-07-12 16:10:33.411873 | orchestrator | 2025-07-12 16:10:33.411902 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-12 16:10:33.411914 | orchestrator | Saturday 12 July 2025 16:10:28 +0000 (0:00:00.260) 0:00:19.644 ********* 2025-07-12 16:10:33.411924 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 16:10:33.411935 | orchestrator | 2025-07-12 16:10:33.411946 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-12 16:10:33.411957 | orchestrator | Saturday 12 July 2025 16:10:30 +0000 (0:00:01.519) 0:00:21.163 ********* 2025-07-12 16:10:33.411968 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 16:10:33.411979 | orchestrator | 2025-07-12 16:10:33.411990 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-12 16:10:33.412000 | orchestrator | Saturday 12 July 2025 16:10:30 +0000 (0:00:00.276) 0:00:21.439 ********* 2025-07-12 16:10:33.412050 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 16:10:33.412061 | orchestrator | 2025-07-12 16:10:33.412072 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 16:10:33.412083 | orchestrator | Saturday 12 July 2025 16:10:30 +0000 (0:00:00.236) 0:00:21.676 ********* 2025-07-12 16:10:33.412094 | orchestrator | 2025-07-12 16:10:33.412105 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 16:10:33.412115 | orchestrator | Saturday 12 July 2025 16:10:31 +0000 (0:00:00.064) 0:00:21.740 ********* 2025-07-12 16:10:33.412126 | orchestrator | 2025-07-12 16:10:33.412137 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-07-12 16:10:33.412148 | orchestrator | Saturday 12 July 2025 16:10:31 +0000 (0:00:00.062) 0:00:21.803 ********* 2025-07-12 16:10:33.412158 | orchestrator | 2025-07-12 16:10:33.412169 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-12 16:10:33.412187 | orchestrator | Saturday 12 July 2025 16:10:31 +0000 (0:00:00.066) 0:00:21.869 ********* 2025-07-12 16:10:33.412198 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 16:10:33.412209 | orchestrator | 2025-07-12 16:10:33.412219 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-12 16:10:33.412230 | orchestrator | Saturday 12 July 2025 16:10:32 +0000 (0:00:01.442) 0:00:23.312 ********* 2025-07-12 16:10:33.412241 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-07-12 16:10:33.412252 | orchestrator |  "msg": [ 2025-07-12 16:10:33.412263 | orchestrator |  "Validator run completed.", 2025-07-12 16:10:33.412274 | orchestrator |  "You can find the report file here:", 2025-07-12 16:10:33.412285 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-07-12T16:10:10+00:00-report.json", 2025-07-12 16:10:33.412297 | orchestrator |  "on the following host:", 2025-07-12 16:10:33.412308 | orchestrator |  "testbed-manager" 2025-07-12 16:10:33.412320 | orchestrator |  ] 2025-07-12 16:10:33.412331 | orchestrator | } 2025-07-12 16:10:33.412342 | orchestrator | 2025-07-12 16:10:33.412353 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 16:10:33.412365 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-07-12 16:10:33.412377 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-12 16:10:33.412388 | orchestrator | 
testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-12 16:10:33.412399 | orchestrator | 2025-07-12 16:10:33.412410 | orchestrator | 2025-07-12 16:10:33.412421 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 16:10:33.412432 | orchestrator | Saturday 12 July 2025 16:10:33 +0000 (0:00:00.785) 0:00:24.097 ********* 2025-07-12 16:10:33.412442 | orchestrator | =============================================================================== 2025-07-12 16:10:33.412453 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.40s 2025-07-12 16:10:33.412464 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.60s 2025-07-12 16:10:33.412474 | orchestrator | Aggregate test results step one ----------------------------------------- 1.52s 2025-07-12 16:10:33.412485 | orchestrator | Write report file ------------------------------------------------------- 1.44s 2025-07-12 16:10:33.412496 | orchestrator | Create report output directory ------------------------------------------ 0.95s 2025-07-12 16:10:33.412506 | orchestrator | Print report file information ------------------------------------------- 0.79s 2025-07-12 16:10:33.412517 | orchestrator | Prepare test data ------------------------------------------------------- 0.71s 2025-07-12 16:10:33.412533 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-07-12 16:10:33.412544 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.51s 2025-07-12 16:10:33.412555 | orchestrator | Set test result to passed if count matches ------------------------------ 0.49s 2025-07-12 16:10:33.412565 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2025-07-12 16:10:33.412576 | orchestrator | Fail if count of encrypted OSDs does not match 
-------------------------- 0.49s 2025-07-12 16:10:33.412587 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.48s 2025-07-12 16:10:33.412598 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.47s 2025-07-12 16:10:33.412608 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s 2025-07-12 16:10:33.412619 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.46s 2025-07-12 16:10:33.412638 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.46s 2025-07-12 16:10:33.662814 | orchestrator | Prepare test data ------------------------------------------------------- 0.44s 2025-07-12 16:10:33.662909 | orchestrator | Flush handlers ---------------------------------------------------------- 0.34s 2025-07-12 16:10:33.662924 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.33s 2025-07-12 16:10:33.940178 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-07-12 16:10:33.946272 | orchestrator | + set -e 2025-07-12 16:10:33.946325 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 16:10:33.946341 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 16:10:33.946352 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 16:10:33.946362 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 16:10:33.946373 | orchestrator | ++ CEPH_VERSION=reef 2025-07-12 16:10:33.946384 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 16:10:33.946396 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 16:10:33.946407 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 16:10:33.946418 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-07-12 16:10:33.946429 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-12 16:10:33.946439 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-12 
16:10:33.946450 | orchestrator | ++ export ARA=false 2025-07-12 16:10:33.946461 | orchestrator | ++ ARA=false 2025-07-12 16:10:33.946471 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-12 16:10:33.946482 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-12 16:10:33.946493 | orchestrator | ++ export TEMPEST=false 2025-07-12 16:10:33.946503 | orchestrator | ++ TEMPEST=false 2025-07-12 16:10:33.946514 | orchestrator | ++ export IS_ZUUL=true 2025-07-12 16:10:33.946525 | orchestrator | ++ IS_ZUUL=true 2025-07-12 16:10:33.946536 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204 2025-07-12 16:10:33.946547 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.204 2025-07-12 16:10:33.946557 | orchestrator | ++ export EXTERNAL_API=false 2025-07-12 16:10:33.946567 | orchestrator | ++ EXTERNAL_API=false 2025-07-12 16:10:33.946578 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-12 16:10:33.946588 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-12 16:10:33.946599 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-12 16:10:33.946609 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-12 16:10:33.946620 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-12 16:10:33.946630 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-12 16:10:33.946641 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-07-12 16:10:33.946651 | orchestrator | + source /etc/os-release 2025-07-12 16:10:33.946662 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-07-12 16:10:33.946672 | orchestrator | ++ NAME=Ubuntu 2025-07-12 16:10:33.946683 | orchestrator | ++ VERSION_ID=24.04 2025-07-12 16:10:33.946694 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-07-12 16:10:33.946704 | orchestrator | ++ VERSION_CODENAME=noble 2025-07-12 16:10:33.946715 | orchestrator | ++ ID=ubuntu 2025-07-12 16:10:33.946725 | orchestrator | ++ ID_LIKE=debian 2025-07-12 16:10:33.946736 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-07-12 16:10:33.946747 | 
orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-07-12 16:10:33.946757 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-07-12 16:10:33.946768 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-07-12 16:10:33.946780 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-07-12 16:10:33.946790 | orchestrator | ++ LOGO=ubuntu-logo 2025-07-12 16:10:33.946801 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-07-12 16:10:33.946816 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-07-12 16:10:33.946835 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-07-12 16:10:33.975809 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-07-12 16:10:55.635555 | orchestrator | 2025-07-12 16:10:55.635664 | orchestrator | # Status of Elasticsearch 2025-07-12 16:10:55.635681 | orchestrator | 2025-07-12 16:10:55.635693 | orchestrator | + pushd /opt/configuration/contrib 2025-07-12 16:10:55.635706 | orchestrator | + echo 2025-07-12 16:10:55.635718 | orchestrator | + echo '# Status of Elasticsearch' 2025-07-12 16:10:55.635729 | orchestrator | + echo 2025-07-12 16:10:55.635740 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-07-12 16:10:55.824593 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-07-12 16:10:55.824735 | orchestrator | 2025-07-12 16:10:55.824760 | orchestrator | # Status of MariaDB 2025-07-12 16:10:55.824779 | orchestrator | 2025-07-12 16:10:55.824796 | orchestrator | + echo 2025-07-12 16:10:55.824814 | orchestrator | + echo '# Status of MariaDB' 2025-07-12 16:10:55.824831 | orchestrator | + echo 2025-07-12 16:10:55.824847 | orchestrator | + MARIADB_USER=root_shard_0 2025-07-12 16:10:55.824864 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-07-12 16:10:55.886730 | orchestrator | Reading package lists... 2025-07-12 16:10:56.180967 | orchestrator | Building dependency tree... 2025-07-12 16:10:56.181411 | orchestrator | Reading state information... 2025-07-12 16:10:56.533984 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-07-12 16:10:56.534216 | orchestrator | bc set to manually installed. 2025-07-12 16:10:56.534231 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-07-12 16:10:57.186451 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-07-12 16:10:57.186543 | orchestrator | 2025-07-12 16:10:57.186559 | orchestrator | # Status of Prometheus 2025-07-12 16:10:57.186571 | orchestrator | 2025-07-12 16:10:57.186583 | orchestrator | + echo 2025-07-12 16:10:57.186594 | orchestrator | + echo '# Status of Prometheus' 2025-07-12 16:10:57.186605 | orchestrator | + echo 2025-07-12 16:10:57.186616 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-07-12 16:10:57.247425 | orchestrator | Unauthorized 2025-07-12 16:10:57.250252 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-07-12 16:10:57.308553 | orchestrator | Unauthorized 2025-07-12 16:10:57.311257 | orchestrator | 2025-07-12 16:10:57.311286 | orchestrator | # Status of RabbitMQ 2025-07-12 16:10:57.311299 | orchestrator | 2025-07-12 16:10:57.311310 | orchestrator | + echo 2025-07-12 16:10:57.311322 | orchestrator | + echo '# Status of RabbitMQ' 2025-07-12 16:10:57.311333 | orchestrator | + echo 2025-07-12 16:10:57.311364 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-07-12 16:10:57.755751 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-07-12 16:10:57.763741 | orchestrator | 2025-07-12 16:10:57.763776 | orchestrator | # Status of Redis 2025-07-12 16:10:57.763789 | orchestrator | 2025-07-12 16:10:57.763801 | orchestrator | + echo 2025-07-12 16:10:57.763813 | orchestrator | + echo '# Status of Redis' 2025-07-12 16:10:57.763826 | orchestrator | + echo 2025-07-12 16:10:57.763839 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-07-12 16:10:57.770806 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001862s;;;0.000000;10.000000 2025-07-12 16:10:57.771421 | orchestrator | + popd 2025-07-12 16:10:57.771447 | orchestrator | 2025-07-12 16:10:57.771459 | orchestrator | # Create backup of MariaDB database 2025-07-12 16:10:57.771471 | orchestrator | 2025-07-12 16:10:57.771482 | orchestrator | + echo 2025-07-12 16:10:57.771494 | orchestrator | + echo '# Create backup of MariaDB database' 2025-07-12 16:10:57.771506 | orchestrator | + echo 2025-07-12 16:10:57.771517 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-07-12 16:10:59.563229 | orchestrator | 2025-07-12 16:10:59 | INFO  | Task e4ae8d6a-5001-4c8c-9afc-2591d57f55e5 (mariadb_backup) was prepared for execution. 2025-07-12 16:10:59.563345 | orchestrator | 2025-07-12 16:10:59 | INFO  | It takes a moment until task e4ae8d6a-5001-4c8c-9afc-2591d57f55e5 (mariadb_backup) has been started and output is visible here. 2025-07-12 16:12:45.966588 | orchestrator | 2025-07-12 16:12:45.966710 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 16:12:45.966726 | orchestrator | 2025-07-12 16:12:45.966738 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 16:12:45.966750 | orchestrator | Saturday 12 July 2025 16:11:03 +0000 (0:00:00.173) 0:00:00.173 ********* 2025-07-12 16:12:45.966762 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:12:45.966774 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:12:45.966785 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:12:45.966820 | orchestrator | 2025-07-12 16:12:45.966832 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 16:12:45.966843 | orchestrator | Saturday 12 July 2025 16:11:03 +0000 (0:00:00.301) 0:00:00.474 ********* 2025-07-12 16:12:45.966853 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-07-12 16:12:45.966865 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-07-12 16:12:45.966875 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-12 16:12:45.966886 | orchestrator | 2025-07-12 16:12:45.966897 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-12 16:12:45.966907 | orchestrator | 2025-07-12 16:12:45.966957 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-12 16:12:45.966968 | orchestrator | Saturday 12 July 2025 16:11:04 +0000 (0:00:00.545) 0:00:01.019 ********* 2025-07-12 16:12:45.966979 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 16:12:45.966990 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-12 16:12:45.967001 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-12 16:12:45.967011 | orchestrator | 2025-07-12 16:12:45.967022 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 16:12:45.967033 | orchestrator | Saturday 12 July 2025 16:11:04 +0000 (0:00:00.372) 0:00:01.392 ********* 2025-07-12 16:12:45.967045 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 16:12:45.967057 | orchestrator | 2025-07-12 16:12:45.967068 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-07-12 16:12:45.967079 | orchestrator | Saturday 12 July 2025 16:11:05 +0000 (0:00:00.492) 0:00:01.884 ********* 2025-07-12 16:12:45.967090 | orchestrator | ok: [testbed-node-1] 2025-07-12 16:12:45.967101 | orchestrator | ok: [testbed-node-0] 2025-07-12 16:12:45.967111 | orchestrator | ok: [testbed-node-2] 2025-07-12 16:12:45.967122 | orchestrator | 2025-07-12 16:12:45.967134 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-07-12 16:12:45.967145 | orchestrator | Saturday 12 July 2025 16:11:08 +0000 (0:00:03.045) 0:00:04.930 ********* 2025-07-12 16:12:45.967158 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-12 16:12:45.967170 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-07-12 16:12:45.967183 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 16:12:45.967195 | orchestrator | mariadb_bootstrap_restart 2025-07-12 16:12:45.967208 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:12:45.967220 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:12:45.967232 | orchestrator | changed: [testbed-node-0] 2025-07-12 16:12:45.967245 | orchestrator | 2025-07-12 16:12:45.967257 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-12 16:12:45.967269 | orchestrator | skipping: no hosts matched 2025-07-12 16:12:45.967280 | orchestrator | 2025-07-12 16:12:45.967292 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 16:12:45.967304 | orchestrator | skipping: no hosts matched 2025-07-12 16:12:45.967316 | orchestrator | 2025-07-12 16:12:45.967328 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-12 16:12:45.967341 | orchestrator | skipping: no hosts matched 2025-07-12 16:12:45.967353 | orchestrator | 2025-07-12 16:12:45.967364 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-12 16:12:45.967376 | orchestrator | 2025-07-12 16:12:45.967389 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-12 16:12:45.967401 | orchestrator | Saturday 12 July 2025 16:12:44 +0000 (0:01:36.768) 0:01:41.698 ********* 2025-07-12 16:12:45.967413 | orchestrator | 
skipping: [testbed-node-0] 2025-07-12 16:12:45.967425 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:12:45.967438 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:12:45.967450 | orchestrator | 2025-07-12 16:12:45.967469 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-12 16:12:45.967482 | orchestrator | Saturday 12 July 2025 16:12:45 +0000 (0:00:00.298) 0:01:41.996 ********* 2025-07-12 16:12:45.967493 | orchestrator | skipping: [testbed-node-0] 2025-07-12 16:12:45.967503 | orchestrator | skipping: [testbed-node-1] 2025-07-12 16:12:45.967514 | orchestrator | skipping: [testbed-node-2] 2025-07-12 16:12:45.967524 | orchestrator | 2025-07-12 16:12:45.967535 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 16:12:45.967547 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 16:12:45.967559 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 16:12:45.967570 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 16:12:45.967581 | orchestrator | 2025-07-12 16:12:45.967591 | orchestrator | 2025-07-12 16:12:45.967602 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 16:12:45.967613 | orchestrator | Saturday 12 July 2025 16:12:45 +0000 (0:00:00.521) 0:01:42.517 ********* 2025-07-12 16:12:45.967623 | orchestrator | =============================================================================== 2025-07-12 16:12:45.967634 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 96.77s 2025-07-12 16:12:45.967661 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.05s 2025-07-12 16:12:45.967672 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 0.55s 2025-07-12 16:12:45.967683 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.52s 2025-07-12 16:12:45.967694 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.49s 2025-07-12 16:12:45.967705 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.37s 2025-07-12 16:12:45.967715 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-07-12 16:12:45.967726 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2025-07-12 16:12:46.305138 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-07-12 16:12:46.314537 | orchestrator | + set -e 2025-07-12 16:12:46.314604 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 16:12:46.314619 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 16:12:46.314631 | orchestrator | ++ INTERACTIVE=false 2025-07-12 16:12:46.314719 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 16:12:46.314733 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 16:12:46.314753 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-12 16:12:46.315885 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-12 16:12:46.321859 | orchestrator | 2025-07-12 16:12:46.321898 | orchestrator | # OpenStack endpoints 2025-07-12 16:12:46.321943 | orchestrator | 2025-07-12 16:12:46.321956 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 16:12:46.321968 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-07-12 16:12:46.321978 | orchestrator | + export OS_CLOUD=admin 2025-07-12 16:12:46.321989 | orchestrator | + OS_CLOUD=admin 2025-07-12 16:12:46.322000 | orchestrator | + echo 2025-07-12 16:12:46.322011 | orchestrator | + echo '# OpenStack 
endpoints' 2025-07-12 16:12:46.322070 | orchestrator | + echo 2025-07-12 16:12:46.322082 | orchestrator | + openstack endpoint list 2025-07-12 16:12:49.868251 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-12 16:12:49.868361 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-07-12 16:12:49.868377 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-12 16:12:49.868417 | orchestrator | | 014f5f518e1c46b38ffba4d37d78a5c6 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-12 16:12:49.868429 | orchestrator | | 0ec38d5b27eb465bb9a48ccd83f0a250 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-07-12 16:12:49.868440 | orchestrator | | 2fc7d0588fc943c6b331a63298c829f0 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-07-12 16:12:49.868451 | orchestrator | | 3ae9d14368084902a19eecbbd5ff271b | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-07-12 16:12:49.868462 | orchestrator | | 3e6b459f06ff4d93827ae9d0580d7fdd | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-07-12 16:12:49.868473 | orchestrator | | 41a7409f7b574b40a8f10e7469326225 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-07-12 16:12:49.868484 | orchestrator | | 510a16a3bac2474aad46d924f5b69188 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-07-12 16:12:49.868494 | orchestrator | | 
60b75e9d486143e6a2fd7ad8a46f2f3b | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-07-12 16:12:49.868519 | orchestrator | | 7b5b3225c2b842d5ab254519205248d4 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-07-12 16:12:49.868530 | orchestrator | | 873c4dc1c7ef433eb346f95c9c23ddc5 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-07-12 16:12:49.868541 | orchestrator | | 8925186c28e6487a86ea7181e624f3ab | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-07-12 16:12:49.868552 | orchestrator | | 921cf8b0fe244fc182621829d35c8af5 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-07-12 16:12:49.868562 | orchestrator | | 97a471abaa0e4cbbb29c4c167ebd84ab | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-07-12 16:12:49.868573 | orchestrator | | 9be2d531de3643f6980de073738d4ad2 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-12 16:12:49.868583 | orchestrator | | afecc292208b49d5bda86389fb72321a | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-07-12 16:12:49.868594 | orchestrator | | c33e606e4d2d440793cbd03c9fec6786 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-07-12 16:12:49.868605 | orchestrator | | c8461953400443e4b24568f301b0db59 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-07-12 16:12:49.868615 | orchestrator | | c888a7acb3f548669225aee5d62ed33a | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-12 16:12:49.868626 | orchestrator | | c91bd55ab1884b4297ea96d003e53ad2 | RegionOne | nova | compute | True | internal | 
https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-07-12 16:12:49.868637 | orchestrator | | d299ff3f2f8c48d19a01963d6977f270 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-12 16:12:49.868672 | orchestrator | | dec29baa3f644df58285e364a8604f5d | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-07-12 16:12:49.868684 | orchestrator | | f5134cfa5e1e420cb65f9cd415003e0b | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-07-12 16:12:49.868695 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-12 16:12:50.081718 | orchestrator | 2025-07-12 16:12:50.081811 | orchestrator | # Cinder 2025-07-12 16:12:50.081824 | orchestrator | 2025-07-12 16:12:50.081836 | orchestrator | + echo 2025-07-12 16:12:50.081847 | orchestrator | + echo '# Cinder' 2025-07-12 16:12:50.081857 | orchestrator | + echo 2025-07-12 16:12:50.081867 | orchestrator | + openstack volume service list 2025-07-12 16:12:52.794322 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 16:12:52.794423 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-07-12 16:12:52.794437 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 16:12:52.794449 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-12T16:12:49.000000 | 2025-07-12 16:12:52.794460 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-12T16:12:49.000000 | 2025-07-12 16:12:52.794471 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 
2025-07-12T16:12:51.000000 | 2025-07-12 16:12:52.794482 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-07-12T16:12:43.000000 | 2025-07-12 16:12:52.794493 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-07-12T16:12:43.000000 | 2025-07-12 16:12:52.794504 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-07-12T16:12:45.000000 | 2025-07-12 16:12:52.794515 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-07-12T16:12:49.000000 | 2025-07-12 16:12:52.794525 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-07-12T16:12:49.000000 | 2025-07-12 16:12:52.794536 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-07-12T16:12:49.000000 | 2025-07-12 16:12:52.794547 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 16:12:53.040815 | orchestrator | 2025-07-12 16:12:53.040971 | orchestrator | # Neutron 2025-07-12 16:12:53.040989 | orchestrator | 2025-07-12 16:12:53.041001 | orchestrator | + echo 2025-07-12 16:12:53.041013 | orchestrator | + echo '# Neutron' 2025-07-12 16:12:53.041025 | orchestrator | + echo 2025-07-12 16:12:53.041037 | orchestrator | + openstack network agent list 2025-07-12 16:12:55.820709 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 16:12:55.820801 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-07-12 16:12:55.820811 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 16:12:55.820820 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) 
| UP | ovn-controller | 2025-07-12 16:12:55.820827 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-07-12 16:12:55.820835 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-07-12 16:12:55.820842 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-07-12 16:12:55.820867 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-07-12 16:12:55.820875 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-07-12 16:12:55.820882 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 16:12:55.820889 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 16:12:55.820896 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 16:12:55.820956 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 16:12:56.058217 | orchestrator | + openstack network service provider list 2025-07-12 16:12:58.520122 | orchestrator | +---------------+------+---------+ 2025-07-12 16:12:58.520248 | orchestrator | | Service Type | Name | Default | 2025-07-12 16:12:58.520272 | orchestrator | +---------------+------+---------+ 2025-07-12 16:12:58.520292 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-07-12 16:12:58.520310 | orchestrator | +---------------+------+---------+ 2025-07-12 16:12:58.758344 | orchestrator | 2025-07-12 16:12:58.758429 | orchestrator | # Nova 
2025-07-12 16:12:58.758443 | orchestrator | 2025-07-12 16:12:58.758455 | orchestrator | + echo 2025-07-12 16:12:58.758467 | orchestrator | + echo '# Nova' 2025-07-12 16:12:58.758479 | orchestrator | + echo 2025-07-12 16:12:58.758491 | orchestrator | + openstack compute service list 2025-07-12 16:13:01.544364 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 16:13:01.544491 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-07-12 16:13:01.544507 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 16:13:01.544519 | orchestrator | | ace52885-a95d-4763-a869-2812af5062af | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-12T16:12:54.000000 | 2025-07-12 16:13:01.544582 | orchestrator | | d5a6e106-6db7-404e-b29d-70b52effc6e5 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-12T16:12:58.000000 | 2025-07-12 16:13:01.544595 | orchestrator | | 25433875-6468-4f46-bf33-3fba7ca33c83 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-12T16:12:52.000000 | 2025-07-12 16:13:01.544607 | orchestrator | | f55e6cb2-a957-447f-95db-63c47fee8e8c | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-07-12T16:12:57.000000 | 2025-07-12 16:13:01.544618 | orchestrator | | 69f7ecce-6e9b-4842-a7a6-460774943b87 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-07-12T16:12:58.000000 | 2025-07-12 16:13:01.544629 | orchestrator | | fc13ebe6-24b4-4b6d-a70f-4210ddd288d2 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-07-12T16:12:52.000000 | 2025-07-12 16:13:01.544640 | orchestrator | | b10e8ae9-36a8-46e0-973e-fde1901cca23 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-07-12T16:12:54.000000 | 2025-07-12 16:13:01.544651 | orchestrator | | 
f67f0ba8-1f38-4c38-bf2f-6d10dccc3865 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-07-12T16:12:55.000000 | 2025-07-12 16:13:01.544661 | orchestrator | | cb100237-0da6-448d-9a1d-d8135be5a494 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-07-12T16:12:55.000000 | 2025-07-12 16:13:01.544673 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 16:13:01.773488 | orchestrator | + openstack hypervisor list 2025-07-12 16:13:06.078440 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 16:13:06.078571 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-07-12 16:13:06.078587 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 16:13:06.078598 | orchestrator | | 719526b5-a311-41ef-a1bd-45ced69b815b | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-07-12 16:13:06.078609 | orchestrator | | 0b253178-114b-4fd3-be38-7b48fb7de6f5 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-07-12 16:13:06.078620 | orchestrator | | 89b6e6bf-ea68-4992-ae46-88e9c53c7587 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-07-12 16:13:06.078631 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 16:13:06.308947 | orchestrator | 2025-07-12 16:13:06.309050 | orchestrator | # Run OpenStack test play 2025-07-12 16:13:06.309066 | orchestrator | 2025-07-12 16:13:06.309078 | orchestrator | + echo 2025-07-12 16:13:06.309090 | orchestrator | + echo '# Run OpenStack test play' 2025-07-12 16:13:06.309102 | orchestrator | + echo 2025-07-12 16:13:06.309113 | orchestrator | + osism apply --environment openstack test 2025-07-12 16:13:08.027999 | orchestrator | 2025-07-12 16:13:08 | INFO  | Trying to run 
play test in environment openstack 2025-07-12 16:13:08.093614 | orchestrator | 2025-07-12 16:13:08 | INFO  | Task 625016bb-ac9c-44ec-9923-1b7de0dcb1a7 (test) was prepared for execution. 2025-07-12 16:13:08.093696 | orchestrator | 2025-07-12 16:13:08 | INFO  | It takes a moment until task 625016bb-ac9c-44ec-9923-1b7de0dcb1a7 (test) has been started and output is visible here. 2025-07-12 16:19:07.594485 | orchestrator | 2025-07-12 16:19:07.594616 | orchestrator | PLAY [Create test project] ***************************************************** 2025-07-12 16:19:07.594636 | orchestrator | 2025-07-12 16:19:07.594648 | orchestrator | TASK [Create test domain] ****************************************************** 2025-07-12 16:19:07.594660 | orchestrator | Saturday 12 July 2025 16:13:12 +0000 (0:00:00.083) 0:00:00.083 ********* 2025-07-12 16:19:07.594672 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.594684 | orchestrator | 2025-07-12 16:19:07.594695 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-07-12 16:19:07.594773 | orchestrator | Saturday 12 July 2025 16:13:16 +0000 (0:00:03.749) 0:00:03.832 ********* 2025-07-12 16:19:07.594789 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.594800 | orchestrator | 2025-07-12 16:19:07.594811 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-07-12 16:19:07.594822 | orchestrator | Saturday 12 July 2025 16:13:20 +0000 (0:00:04.283) 0:00:08.115 ********* 2025-07-12 16:19:07.594833 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.594843 | orchestrator | 2025-07-12 16:19:07.594854 | orchestrator | TASK [Create test project] ***************************************************** 2025-07-12 16:19:07.594865 | orchestrator | Saturday 12 July 2025 16:13:26 +0000 (0:00:06.538) 0:00:14.654 ********* 2025-07-12 16:19:07.594876 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.594886 | orchestrator | 
2025-07-12 16:19:07.594897 | orchestrator | TASK [Create test user] ******************************************************** 2025-07-12 16:19:07.594908 | orchestrator | Saturday 12 July 2025 16:13:30 +0000 (0:00:04.028) 0:00:18.682 ********* 2025-07-12 16:19:07.594919 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.594930 | orchestrator | 2025-07-12 16:19:07.594940 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-07-12 16:19:07.594951 | orchestrator | Saturday 12 July 2025 16:13:34 +0000 (0:00:04.054) 0:00:22.736 ********* 2025-07-12 16:19:07.594962 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-07-12 16:19:07.594974 | orchestrator | changed: [localhost] => (item=member) 2025-07-12 16:19:07.594985 | orchestrator | changed: [localhost] => (item=creator) 2025-07-12 16:19:07.594996 | orchestrator | 2025-07-12 16:19:07.595007 | orchestrator | TASK [Create test server group] ************************************************ 2025-07-12 16:19:07.595019 | orchestrator | Saturday 12 July 2025 16:13:46 +0000 (0:00:11.479) 0:00:34.215 ********* 2025-07-12 16:19:07.595144 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.595160 | orchestrator | 2025-07-12 16:19:07.595173 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-07-12 16:19:07.595186 | orchestrator | Saturday 12 July 2025 16:13:50 +0000 (0:00:04.048) 0:00:38.264 ********* 2025-07-12 16:19:07.595198 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.595211 | orchestrator | 2025-07-12 16:19:07.595223 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-07-12 16:19:07.595235 | orchestrator | Saturday 12 July 2025 16:13:55 +0000 (0:00:04.724) 0:00:42.989 ********* 2025-07-12 16:19:07.595248 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.595260 | orchestrator | 2025-07-12 16:19:07.595273 | orchestrator | TASK 
[Create icmp security group] ********************************************** 2025-07-12 16:19:07.595285 | orchestrator | Saturday 12 July 2025 16:13:59 +0000 (0:00:04.136) 0:00:47.126 ********* 2025-07-12 16:19:07.595297 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.595310 | orchestrator | 2025-07-12 16:19:07.595322 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-07-12 16:19:07.595335 | orchestrator | Saturday 12 July 2025 16:14:03 +0000 (0:00:03.830) 0:00:50.956 ********* 2025-07-12 16:19:07.595347 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.595360 | orchestrator | 2025-07-12 16:19:07.595372 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-07-12 16:19:07.595382 | orchestrator | Saturday 12 July 2025 16:14:07 +0000 (0:00:03.929) 0:00:54.886 ********* 2025-07-12 16:19:07.595393 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.595403 | orchestrator | 2025-07-12 16:19:07.595414 | orchestrator | TASK [Create test network topology] ******************************************** 2025-07-12 16:19:07.595425 | orchestrator | Saturday 12 July 2025 16:14:11 +0000 (0:00:04.270) 0:00:59.156 ********* 2025-07-12 16:19:07.595435 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.595446 | orchestrator | 2025-07-12 16:19:07.595457 | orchestrator | TASK [Create test instances] *************************************************** 2025-07-12 16:19:07.595467 | orchestrator | Saturday 12 July 2025 16:14:27 +0000 (0:00:16.450) 0:01:15.607 ********* 2025-07-12 16:19:07.595478 | orchestrator | changed: [localhost] => (item=test) 2025-07-12 16:19:07.595489 | orchestrator | changed: [localhost] => (item=test-1) 2025-07-12 16:19:07.595500 | orchestrator | changed: [localhost] => (item=test-2) 2025-07-12 16:19:07.595510 | orchestrator | 2025-07-12 16:19:07.595521 | orchestrator | STILL ALIVE [task 'Create test instances' is running] 
************************** 2025-07-12 16:19:07.595531 | orchestrator | changed: [localhost] => (item=test-3) 2025-07-12 16:19:07.595543 | orchestrator | 2025-07-12 16:19:07.595553 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-07-12 16:19:07.595564 | orchestrator | 2025-07-12 16:19:07.595574 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-07-12 16:19:07.595585 | orchestrator | changed: [localhost] => (item=test-4) 2025-07-12 16:19:07.595596 | orchestrator | 2025-07-12 16:19:07.595606 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-07-12 16:19:07.595617 | orchestrator | Saturday 12 July 2025 16:17:43 +0000 (0:03:16.026) 0:04:31.633 ********* 2025-07-12 16:19:07.595627 | orchestrator | changed: [localhost] => (item=test) 2025-07-12 16:19:07.595638 | orchestrator | changed: [localhost] => (item=test-1) 2025-07-12 16:19:07.595649 | orchestrator | changed: [localhost] => (item=test-2) 2025-07-12 16:19:07.595660 | orchestrator | changed: [localhost] => (item=test-3) 2025-07-12 16:19:07.595670 | orchestrator | changed: [localhost] => (item=test-4) 2025-07-12 16:19:07.595681 | orchestrator | 2025-07-12 16:19:07.595691 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-07-12 16:19:07.595731 | orchestrator | Saturday 12 July 2025 16:18:07 +0000 (0:00:23.899) 0:04:55.533 ********* 2025-07-12 16:19:07.595746 | orchestrator | changed: [localhost] => (item=test) 2025-07-12 16:19:07.595757 | orchestrator | changed: [localhost] => (item=test-1) 2025-07-12 16:19:07.595797 | orchestrator | changed: [localhost] => (item=test-2) 2025-07-12 16:19:07.595808 | orchestrator | changed: [localhost] => (item=test-3) 2025-07-12 16:19:07.595823 | orchestrator | changed: [localhost] => (item=test-4) 2025-07-12 16:19:07.595834 | orchestrator | 2025-07-12 16:19:07.595845 | 
orchestrator | TASK [Create test volume] ****************************************************** 2025-07-12 16:19:07.595856 | orchestrator | Saturday 12 July 2025 16:18:41 +0000 (0:00:33.367) 0:05:28.901 ********* 2025-07-12 16:19:07.595867 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.595877 | orchestrator | 2025-07-12 16:19:07.595888 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-07-12 16:19:07.595899 | orchestrator | Saturday 12 July 2025 16:18:48 +0000 (0:00:07.427) 0:05:36.328 ********* 2025-07-12 16:19:07.595916 | orchestrator | changed: [localhost] 2025-07-12 16:19:07.595936 | orchestrator | 2025-07-12 16:19:07.595955 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-07-12 16:19:07.595966 | orchestrator | Saturday 12 July 2025 16:19:02 +0000 (0:00:13.613) 0:05:49.942 ********* 2025-07-12 16:19:07.595978 | orchestrator | ok: [localhost] 2025-07-12 16:19:07.595988 | orchestrator | 2025-07-12 16:19:07.595999 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-07-12 16:19:07.596010 | orchestrator | Saturday 12 July 2025 16:19:07 +0000 (0:00:05.083) 0:05:55.025 ********* 2025-07-12 16:19:07.596021 | orchestrator | ok: [localhost] => { 2025-07-12 16:19:07.596032 | orchestrator |  "msg": "192.168.112.121" 2025-07-12 16:19:07.596043 | orchestrator | } 2025-07-12 16:19:07.596054 | orchestrator | 2025-07-12 16:19:07.596065 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 16:19:07.596076 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 16:19:07.596089 | orchestrator | 2025-07-12 16:19:07.596100 | orchestrator | 2025-07-12 16:19:07.596110 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 16:19:07.596121 | orchestrator | 
Saturday 12 July 2025 16:19:07 +0000 (0:00:00.047) 0:05:55.073 *********
2025-07-12 16:19:07.596132 | orchestrator | ===============================================================================
2025-07-12 16:19:07.596142 | orchestrator | Create test instances ------------------------------------------------- 196.03s
2025-07-12 16:19:07.596153 | orchestrator | Add tag to instances --------------------------------------------------- 33.37s
2025-07-12 16:19:07.596163 | orchestrator | Add metadata to instances ---------------------------------------------- 23.90s
2025-07-12 16:19:07.596174 | orchestrator | Create test network topology ------------------------------------------- 16.45s
2025-07-12 16:19:07.596184 | orchestrator | Attach test volume ----------------------------------------------------- 13.61s
2025-07-12 16:19:07.596195 | orchestrator | Add member roles to user test ------------------------------------------ 11.48s
2025-07-12 16:19:07.596205 | orchestrator | Create test volume ------------------------------------------------------ 7.43s
2025-07-12 16:19:07.596216 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.54s
2025-07-12 16:19:07.596227 | orchestrator | Create floating ip address ---------------------------------------------- 5.08s
2025-07-12 16:19:07.596237 | orchestrator | Create ssh security group ----------------------------------------------- 4.72s
2025-07-12 16:19:07.596248 | orchestrator | Create test-admin user -------------------------------------------------- 4.28s
2025-07-12 16:19:07.596258 | orchestrator | Create test keypair ----------------------------------------------------- 4.27s
2025-07-12 16:19:07.596269 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.14s
2025-07-12 16:19:07.596279 | orchestrator | Create test user -------------------------------------------------------- 4.05s
2025-07-12 16:19:07.596306 | orchestrator | Create test server group ------------------------------------------------ 4.05s
2025-07-12 16:19:07.596317 | orchestrator | Create test project ----------------------------------------------------- 4.03s
2025-07-12 16:19:07.596328 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.93s
2025-07-12 16:19:07.596346 | orchestrator | Create icmp security group ---------------------------------------------- 3.83s
2025-07-12 16:19:07.596362 | orchestrator | Create test domain ------------------------------------------------------ 3.75s
2025-07-12 16:19:07.596373 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s
2025-07-12 16:19:07.859167 | orchestrator | + server_list
2025-07-12 16:19:07.859264 | orchestrator | + openstack --os-cloud test server list
2025-07-12 16:19:11.689338 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-12 16:19:11.689416 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-07-12 16:19:11.689421 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-12 16:19:11.689425 | orchestrator | | 56a27700-519e-451a-a74f-44fbf72861f3 | test-4 | ACTIVE | auto_allocated_network=10.42.0.34, 192.168.112.186 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 16:19:11.689430 | orchestrator | | 2c28d889-0545-44f7-a347-12590119b738 | test-3 | ACTIVE | auto_allocated_network=10.42.0.49, 192.168.112.149 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 16:19:11.689434 | orchestrator | | c5442d41-99ad-4b91-950d-c9ff937e88ce | test-2 | ACTIVE | auto_allocated_network=10.42.0.3, 192.168.112.192 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 16:19:11.689438 | orchestrator | | 9c02b027-9f83-465c-9c09-9faa56e97b8e | test-1 | ACTIVE | auto_allocated_network=10.42.0.44, 192.168.112.108 | Cirros
0.6.2 | SCS-1L-1-5 |
2025-07-12 16:19:11.689442 | orchestrator | | 54fbd094-eb02-4a2b-a4a3-7d8e0f292074 | test | ACTIVE | auto_allocated_network=10.42.0.19, 192.168.112.121 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 16:19:11.689446 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-12 16:19:11.986196 | orchestrator | + openstack --os-cloud test server show test
2025-07-12 16:19:15.564814 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:15.564921 | orchestrator | | Field | Value |
2025-07-12 16:19:15.564939 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:15.564951 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 16:19:15.564962 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 16:19:15.564974 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 16:19:15.565006 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-07-12 16:19:15.565025 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 16:19:15.565037 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 16:19:15.565048 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 16:19:15.565059 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 16:19:15.565089 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 16:19:15.565100 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 16:19:15.565111 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 16:19:15.565122 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 16:19:15.565133 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 16:19:15.565144 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 16:19:15.565162 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 16:19:15.565173 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T16:14:58.000000 |
2025-07-12 16:19:15.565188 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 16:19:15.565199 | orchestrator | | accessIPv4 | |
2025-07-12 16:19:15.565210 | orchestrator | | accessIPv6 | |
2025-07-12 16:19:15.565221 | orchestrator | | addresses | auto_allocated_network=10.42.0.19, 192.168.112.121 |
2025-07-12 16:19:15.565239 | orchestrator | | config_drive | |
2025-07-12 16:19:15.565250 | orchestrator | | created | 2025-07-12T16:14:36Z |
2025-07-12 16:19:15.565261 | orchestrator | | description | None |
2025-07-12 16:19:15.565272 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-12 16:19:15.565290 | orchestrator | | hostId | 267ab6df412a33517e69b2cd42fd363ea018d68e563e64fb9c5c2042 |
2025-07-12 16:19:15.565301 | orchestrator | | host_status | None |
2025-07-12 16:19:15.565312 | orchestrator | | id | 54fbd094-eb02-4a2b-a4a3-7d8e0f292074 |
2025-07-12 16:19:15.565327 | orchestrator | | image | Cirros 0.6.2 (c85d1349-eb9f-46df-8984-63eaa3c719d4) |
2025-07-12 16:19:15.565338 | orchestrator | | key_name | test |
2025-07-12 16:19:15.565349 | orchestrator | | locked | False |
2025-07-12 16:19:15.565360 | orchestrator | | locked_reason | None |
2025-07-12 16:19:15.565371 | orchestrator | | name | test |
2025-07-12 16:19:15.565389 | orchestrator | | pinned_availability_zone | None |
2025-07-12 16:19:15.565400 | orchestrator | | progress | 0 |
2025-07-12 16:19:15.565412 | orchestrator | | project_id | 26627ad1612b4266b669a1f25dbbab46 |
2025-07-12 16:19:15.565428 | orchestrator | | properties | hostname='test' |
2025-07-12 16:19:15.565439 | orchestrator | | security_groups | name='icmp' |
2025-07-12 16:19:15.565450 | orchestrator | | | name='ssh' |
2025-07-12 16:19:15.565461 | orchestrator | | server_groups | None |
2025-07-12 16:19:15.565473 | orchestrator | | status | ACTIVE |
2025-07-12 16:19:15.565484 | orchestrator | | tags | test |
2025-07-12 16:19:15.565495 | orchestrator | | trusted_image_certificates | None |
2025-07-12 16:19:15.565506 | orchestrator | | updated | 2025-07-12T16:17:48Z |
2025-07-12 16:19:15.565522 | orchestrator | | user_id | f97f5b5609db404fba61b250d0ac2f18 |
2025-07-12 16:19:15.565533 | orchestrator | | volumes_attached | delete_on_termination='False', id='b6f2baf4-a29c-4f3f-a342-1908a4a4c41a' |
2025-07-12 16:19:15.568811 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:15.812196 | orchestrator | + openstack --os-cloud test server show test-1
2025-07-12 16:19:19.025474 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:19.025578 | orchestrator | | Field | Value |
2025-07-12 16:19:19.025595 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:19.025607 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 16:19:19.025633 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 16:19:19.025644 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 16:19:19.025656 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-07-12 16:19:19.025668 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 16:19:19.025679 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 16:19:19.025691 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 16:19:19.025756 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 16:19:19.025796 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 16:19:19.025809 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 16:19:19.025820 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 16:19:19.025831 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 16:19:19.025842 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 16:19:19.025857 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 16:19:19.025869 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 16:19:19.025880 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T16:15:42.000000 |
2025-07-12 16:19:19.025891 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 16:19:19.025902 | orchestrator | | accessIPv4 | |
2025-07-12 16:19:19.025920 | orchestrator | | accessIPv6 | |
2025-07-12 16:19:19.025932 | orchestrator | | addresses | auto_allocated_network=10.42.0.44, 192.168.112.108 |
2025-07-12 16:19:19.025979 | orchestrator | | config_drive | |
2025-07-12 16:19:19.025994 | orchestrator | | created | 2025-07-12T16:15:20Z |
2025-07-12 16:19:19.026007 | orchestrator | | description | None |
2025-07-12 16:19:19.026096 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-12 16:19:19.026111 | orchestrator | | hostId | 0e1f0233e0ec9c93f9bca7de1cf08eb01828556ba217732aef75517a |
2025-07-12 16:19:19.026129 | orchestrator | | host_status | None |
2025-07-12 16:19:19.026142 | orchestrator | | id | 9c02b027-9f83-465c-9c09-9faa56e97b8e |
2025-07-12 16:19:19.026153 | orchestrator | | image | Cirros 0.6.2 (c85d1349-eb9f-46df-8984-63eaa3c719d4) |
2025-07-12 16:19:19.026164 | orchestrator | | key_name | test |
2025-07-12 16:19:19.026191 | orchestrator | | locked | False |
2025-07-12 16:19:19.026209 | orchestrator | | locked_reason | None |
2025-07-12 16:19:19.026220 | orchestrator | | name | test-1 |
2025-07-12 16:19:19.026240 | orchestrator | | pinned_availability_zone | None |
2025-07-12 16:19:19.026251 | orchestrator | | progress | 0 |
2025-07-12 16:19:19.026262 | orchestrator | | project_id | 26627ad1612b4266b669a1f25dbbab46 |
2025-07-12 16:19:19.026273 | orchestrator | | properties | hostname='test-1' |
2025-07-12 16:19:19.026289 | orchestrator | | security_groups | name='icmp' |
2025-07-12 16:19:19.026301 | orchestrator | | | name='ssh' |
2025-07-12 16:19:19.026311 | orchestrator | | server_groups | None |
2025-07-12 16:19:19.026322 | orchestrator | | status | ACTIVE |
2025-07-12 16:19:19.026346 | orchestrator | | tags | test |
2025-07-12 16:19:19.026358 | orchestrator | | trusted_image_certificates | None |
2025-07-12 16:19:19.026368 | orchestrator | | updated | 2025-07-12T16:17:53Z |
2025-07-12 16:19:19.026385 | orchestrator | | user_id | f97f5b5609db404fba61b250d0ac2f18 |
2025-07-12 16:19:19.026396 | orchestrator | | volumes_attached | |
2025-07-12 16:19:19.034991 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:19.313146 | orchestrator | + openstack --os-cloud test server show test-2
2025-07-12 16:19:22.458502 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:22.459376 | orchestrator | | Field | Value |
2025-07-12 16:19:22.459427 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:22.459442 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 16:19:22.459476 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 16:19:22.459489 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 16:19:22.459502 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-07-12 16:19:22.459514 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 16:19:22.459527 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 16:19:22.459539 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 16:19:22.459551 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 16:19:22.459584 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 16:19:22.459596 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 16:19:22.459608 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 16:19:22.459619 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 16:19:22.459637 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 16:19:22.459648 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 16:19:22.459659 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 16:19:22.459670 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T16:16:22.000000 |
2025-07-12 16:19:22.459681 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 16:19:22.459693 | orchestrator | | accessIPv4 | |
2025-07-12 16:19:22.459734 | orchestrator | | accessIPv6 | |
2025-07-12 16:19:22.459748 | orchestrator | | addresses | auto_allocated_network=10.42.0.3, 192.168.112.192 |
2025-07-12 16:19:22.459766 | orchestrator | | config_drive | |
2025-07-12 16:19:22.459778 | orchestrator | | created | 2025-07-12T16:15:59Z |
2025-07-12 16:19:22.459794 | orchestrator | | description | None |
2025-07-12 16:19:22.459819 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-12 16:19:22.459831 | orchestrator | | hostId | e3dd99624d7627b300373558bd82887feab5726482b81069ad1a8d2a |
2025-07-12 16:19:22.459842 | orchestrator | | host_status | None |
2025-07-12 16:19:22.459853 | orchestrator | | id | c5442d41-99ad-4b91-950d-c9ff937e88ce |
2025-07-12 16:19:22.459864 | orchestrator | | image | Cirros 0.6.2 (c85d1349-eb9f-46df-8984-63eaa3c719d4) |
2025-07-12 16:19:22.459875 | orchestrator | | key_name | test |
2025-07-12 16:19:22.459886 | orchestrator | | locked | False |
2025-07-12 16:19:22.459897 | orchestrator | | locked_reason | None |
2025-07-12 16:19:22.459908 | orchestrator | | name | test-2 |
2025-07-12 16:19:22.459925 | orchestrator | | pinned_availability_zone | None |
2025-07-12 16:19:22.459936 | orchestrator | | progress | 0 |
2025-07-12 16:19:22.459959 | orchestrator | | project_id | 26627ad1612b4266b669a1f25dbbab46 |
2025-07-12 16:19:22.459970 | orchestrator | | properties | hostname='test-2' |
2025-07-12 16:19:22.459983 | orchestrator | | security_groups | name='icmp' |
2025-07-12 16:19:22.460003 | orchestrator | | | name='ssh' |
2025-07-12 16:19:22.460029 | orchestrator | | server_groups | None |
2025-07-12 16:19:22.460055 | orchestrator | | status | ACTIVE |
2025-07-12 16:19:22.460074 | orchestrator | | tags | test |
2025-07-12 16:19:22.460093 | orchestrator | | trusted_image_certificates | None |
2025-07-12 16:19:22.460114 | orchestrator | | updated | 2025-07-12T16:17:58Z |
2025-07-12 16:19:22.460144 | orchestrator | | user_id | f97f5b5609db404fba61b250d0ac2f18 |
2025-07-12 16:19:22.460176 | orchestrator | | volumes_attached | |
2025-07-12 16:19:22.462997 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:22.698649 | orchestrator | + openstack --os-cloud test server show test-3
2025-07-12 16:19:25.795078 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:25.795193 | orchestrator | | Field | Value |
2025-07-12 16:19:25.795209 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:25.795221 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 16:19:25.795233 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 16:19:25.795244 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 16:19:25.795255 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-07-12 16:19:25.795266 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 16:19:25.795278 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 16:19:25.795311 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 16:19:25.795323 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 16:19:25.795358 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 16:19:25.795371 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 16:19:25.795382 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 16:19:25.795393 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 16:19:25.795404 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 16:19:25.795415 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 16:19:25.795426 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 16:19:25.795437 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T16:16:58.000000 |
2025-07-12 16:19:25.795448 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 16:19:25.795469 | orchestrator | | accessIPv4 | |
2025-07-12 16:19:25.795480 | orchestrator | | accessIPv6 | |
2025-07-12 16:19:25.795492 | orchestrator | | addresses | auto_allocated_network=10.42.0.49, 192.168.112.149 |
2025-07-12 16:19:25.795517 | orchestrator | | config_drive | |
2025-07-12 16:19:25.795538 | orchestrator | | created | 2025-07-12T16:16:43Z |
2025-07-12 16:19:25.795560 | orchestrator | | description | None |
2025-07-12 16:19:25.795589 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-12 16:19:25.795606 | orchestrator | | hostId | 0e1f0233e0ec9c93f9bca7de1cf08eb01828556ba217732aef75517a |
2025-07-12 16:19:25.795624 | orchestrator | | host_status | None |
2025-07-12 16:19:25.795642 | orchestrator | | id | 2c28d889-0545-44f7-a347-12590119b738 |
2025-07-12 16:19:25.795660 | orchestrator | | image | Cirros 0.6.2 (c85d1349-eb9f-46df-8984-63eaa3c719d4) |
2025-07-12 16:19:25.795692 | orchestrator | | key_name | test |
2025-07-12 16:19:25.795712 | orchestrator | | locked | False |
2025-07-12 16:19:25.795759 | orchestrator | | locked_reason | None |
2025-07-12 16:19:25.795774 | orchestrator | | name | test-3 |
2025-07-12 16:19:25.795803 | orchestrator | | pinned_availability_zone | None |
2025-07-12 16:19:25.795815 | orchestrator | | progress | 0 |
2025-07-12 16:19:25.795826 | orchestrator | | project_id | 26627ad1612b4266b669a1f25dbbab46 |
2025-07-12 16:19:25.795837 | orchestrator | | properties | hostname='test-3' |
2025-07-12 16:19:25.795848 | orchestrator | | security_groups | name='icmp' |
2025-07-12 16:19:25.795859 | orchestrator | | | name='ssh' |
2025-07-12 16:19:25.795881 | orchestrator | | server_groups | None |
2025-07-12 16:19:25.795892 | orchestrator | | status | ACTIVE |
2025-07-12 16:19:25.795903 | orchestrator | | tags | test |
2025-07-12 16:19:25.795914 | orchestrator | | trusted_image_certificates | None |
2025-07-12 16:19:25.795925 | orchestrator | | updated | 2025-07-12T16:18:03Z |
2025-07-12 16:19:25.795942 | orchestrator | | user_id | f97f5b5609db404fba61b250d0ac2f18 |
2025-07-12 16:19:25.795961 | orchestrator | | volumes_attached | |
2025-07-12 16:19:25.799854 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:26.045619 | orchestrator | + openstack --os-cloud test server show test-4
2025-07-12 16:19:29.156944 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:29.157052 | orchestrator | | Field | Value |
2025-07-12 16:19:29.157069 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:29.157102 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 16:19:29.157115 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 16:19:29.157126 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 16:19:29.157138 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-07-12 16:19:29.157150 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 16:19:29.157162 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 16:19:29.157182 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 16:19:29.157193 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 16:19:29.157223 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 16:19:29.157236 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 16:19:29.157247 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 16:19:29.157266 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 16:19:29.157277 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 16:19:29.157289 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 16:19:29.157300 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 16:19:29.157312 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T16:17:33.000000 |
2025-07-12 16:19:29.157323 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 16:19:29.157339 | orchestrator | | accessIPv4 | |
2025-07-12 16:19:29.157350 | orchestrator | | accessIPv6 | |
2025-07-12 16:19:29.157362 | orchestrator | | addresses | auto_allocated_network=10.42.0.34, 192.168.112.186 |
2025-07-12 16:19:29.157381 | orchestrator | | config_drive | |
2025-07-12 16:19:29.157393 | orchestrator | | created | 2025-07-12T16:17:16Z |
2025-07-12 16:19:29.157410 | orchestrator | | description | None |
2025-07-12 16:19:29.157422 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-12 16:19:29.157434 | orchestrator | | hostId | 267ab6df412a33517e69b2cd42fd363ea018d68e563e64fb9c5c2042 |
2025-07-12 16:19:29.157445 | orchestrator | | host_status | None |
2025-07-12 16:19:29.157457 | orchestrator | | id | 56a27700-519e-451a-a74f-44fbf72861f3 |
2025-07-12 16:19:29.157469 | orchestrator | | image | Cirros 0.6.2 (c85d1349-eb9f-46df-8984-63eaa3c719d4) |
2025-07-12 16:19:29.157482 | orchestrator | | key_name | test |
2025-07-12 16:19:29.157500 | orchestrator | | locked | False |
2025-07-12 16:19:29.157513 | orchestrator | | locked_reason | None |
2025-07-12 16:19:29.157526 | orchestrator | | name | test-4 |
2025-07-12 16:19:29.157552 | orchestrator | | pinned_availability_zone | None |
2025-07-12 16:19:29.157565 | orchestrator | | progress | 0 |
2025-07-12 16:19:29.157578 | orchestrator | | project_id | 26627ad1612b4266b669a1f25dbbab46 |
2025-07-12 16:19:29.157591 | orchestrator | | properties | hostname='test-4' |
2025-07-12 16:19:29.157604 | orchestrator | | security_groups | name='icmp' |
2025-07-12 16:19:29.157618 | orchestrator | | | name='ssh' |
2025-07-12 16:19:29.157630 | orchestrator | | server_groups | None |
2025-07-12 16:19:29.157643 | orchestrator | | status | ACTIVE |
2025-07-12 16:19:29.157656 | orchestrator | | tags | test |
2025-07-12 16:19:29.157674 | orchestrator | | trusted_image_certificates | None |
2025-07-12 16:19:29.157687 | orchestrator | | updated | 2025-07-12T16:18:07Z |
2025-07-12 16:19:29.157711 | orchestrator | | user_id | f97f5b5609db404fba61b250d0ac2f18 |
2025-07-12 16:19:29.157751 | orchestrator | | volumes_attached | |
2025-07-12 16:19:29.162576 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 16:19:29.453790 | orchestrator | + server_ping
2025-07-12 16:19:29.457985 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-12 16:19:29.458130 | orchestrator | ++ tr -d '\r'
2025-07-12 16:19:32.414913 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 16:19:32.414959 | orchestrator | + ping -c3 192.168.112.108
2025-07-12 16:19:32.429002 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-07-12 16:19:32.429050 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=8.81 ms
2025-07-12 16:19:33.425819 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=3.36 ms
2025-07-12 16:19:34.426679 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.00 ms
2025-07-12 16:19:34.426773 | orchestrator |
2025-07-12 16:19:34.426789 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-07-12 16:19:34.426801 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-07-12 16:19:34.426812 | orchestrator | rtt min/avg/max/mdev = 1.999/4.722/8.806/2.940 ms
2025-07-12 16:19:34.427592 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 16:19:34.427615 | orchestrator | + ping -c3 192.168.112.186
2025-07-12 16:19:34.437316 | orchestrator | PING 192.168.112.186 (192.168.112.186) 56(84) bytes of data.
2025-07-12 16:19:34.437372 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=1 ttl=63 time=5.38 ms
2025-07-12 16:19:35.435886 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=2 ttl=63 time=2.29 ms
2025-07-12 16:19:36.436880 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=3 ttl=63 time=1.90 ms
2025-07-12 16:19:36.436925 | orchestrator |
2025-07-12 16:19:36.436932 | orchestrator | --- 192.168.112.186 ping statistics ---
2025-07-12 16:19:36.436938 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 16:19:36.436942 | orchestrator | rtt min/avg/max/mdev = 1.900/3.192/5.384/1.558 ms
2025-07-12 16:19:36.438581 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 16:19:36.438593 | orchestrator | + ping -c3 192.168.112.192
2025-07-12 16:19:36.450993 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-07-12 16:19:36.451003 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=8.29 ms
2025-07-12 16:19:37.447470 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.82 ms
2025-07-12 16:19:38.448835 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=2.03 ms
2025-07-12 16:19:38.449923 | orchestrator |
2025-07-12 16:19:38.449962 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-07-12 16:19:38.449976 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-07-12 16:19:38.449988 | orchestrator | rtt min/avg/max/mdev = 2.027/4.376/8.286/2.783 ms
2025-07-12 16:19:38.450014 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 16:19:38.450095 | orchestrator | + ping -c3 192.168.112.121
2025-07-12 16:19:38.466261 | orchestrator | PING 192.168.112.121 (192.168.112.121) 56(84) bytes of data.
2025-07-12 16:19:38.466301 | orchestrator | 64 bytes from 192.168.112.121: icmp_seq=1 ttl=63 time=11.6 ms
2025-07-12 16:19:39.459217 | orchestrator | 64 bytes from 192.168.112.121: icmp_seq=2 ttl=63 time=3.31 ms
2025-07-12 16:19:40.459191 | orchestrator | 64 bytes from 192.168.112.121: icmp_seq=3 ttl=63 time=1.67 ms
2025-07-12 16:19:40.459307 | orchestrator |
2025-07-12 16:19:40.459322 | orchestrator | --- 192.168.112.121 ping statistics ---
2025-07-12 16:19:40.459335 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 16:19:40.459346 | orchestrator | rtt min/avg/max/mdev = 1.672/5.509/11.552/4.324 ms
2025-07-12 16:19:40.459784 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 16:19:40.459811 | orchestrator | + ping -c3 192.168.112.149
2025-07-12 16:19:40.470865 | orchestrator | PING 192.168.112.149 (192.168.112.149) 56(84) bytes of data.
2025-07-12 16:19:40.470913 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=1 ttl=63 time=6.17 ms
2025-07-12 16:19:41.469404 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=2 ttl=63 time=3.05 ms
2025-07-12 16:19:42.469434 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=3 ttl=63 time=1.99 ms
2025-07-12 16:19:42.469550 | orchestrator |
2025-07-12 16:19:42.469567 | orchestrator | --- 192.168.112.149 ping statistics ---
2025-07-12 16:19:42.469580 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-07-12 16:19:42.469591 | orchestrator | rtt min/avg/max/mdev = 1.992/3.737/6.167/1.771 ms
2025-07-12 16:19:42.470150 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]]
2025-07-12 16:19:42.566321 | orchestrator | ok: Runtime: 0:11:41.671252
2025-07-12 16:19:42.605868 |
2025-07-12 16:19:42.606001 | TASK [Run tempest]
2025-07-12 16:19:43.141239 | orchestrator | skipping: Conditional result was False
2025-07-12 16:19:43.157535 |
2025-07-12 16:19:43.157711 | TASK [Check prometheus alert status]
2025-07-12 16:19:43.693481 | orchestrator | skipping: Conditional result was False
2025-07-12 16:19:43.696927 |
2025-07-12 16:19:43.697101 | PLAY RECAP
2025-07-12 16:19:43.697255 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-07-12 16:19:43.697324 |
2025-07-12 16:19:43.930725 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-12 16:19:43.932986 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-12 16:19:44.711841 |
2025-07-12 16:19:44.712731 | PLAY [Post output play]
2025-07-12 16:19:44.729601 |
2025-07-12 16:19:44.729788 | LOOP [stage-output : Register sources]
2025-07-12 16:19:44.795216 |
2025-07-12 16:19:44.795518 | TASK [stage-output : Check sudo]
2025-07-12 16:19:45.645119 | orchestrator | sudo: a password is required
2025-07-12 16:19:45.847365 | orchestrator | ok: Runtime: 0:00:00.023129
2025-07-12 16:19:45.863514 |
2025-07-12 16:19:45.863765 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-12 16:19:45.901428 |
2025-07-12 16:19:45.901749 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-12 16:19:45.970413 | orchestrator | ok
2025-07-12 16:19:45.978985 |
2025-07-12 16:19:45.979146 | LOOP [stage-output : Ensure target folders exist]
2025-07-12 16:19:46.446135 | orchestrator | ok: "docs"
2025-07-12 16:19:46.446482 |
2025-07-12 16:19:46.687785 | orchestrator | ok: "artifacts"
2025-07-12 16:19:46.928005 | orchestrator | ok: "logs"
2025-07-12 16:19:46.944274 |
2025-07-12 16:19:46.944461 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-12 16:19:46.979053 |
2025-07-12 16:19:46.979314 | TASK [stage-output : Make all log files readable]
2025-07-12 16:19:47.270636 | orchestrator | ok
2025-07-12 16:19:47.279261 |
2025-07-12 16:19:47.279430 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-12 16:19:47.316481 | orchestrator | skipping: Conditional result was False
2025-07-12 16:19:47.332636 |
2025-07-12 16:19:47.332887 | TASK [stage-output : Discover log files for compression]
2025-07-12 16:19:47.357572 | orchestrator | skipping: Conditional result was False
2025-07-12 16:19:47.371239 |
2025-07-12 16:19:47.371422 | LOOP [stage-output : Archive everything from logs]
2025-07-12 16:19:47.422753 |
2025-07-12 16:19:47.423079 | PLAY [Post cleanup play]
2025-07-12 16:19:47.432981 |
2025-07-12 16:19:47.433140 | TASK [Set cloud fact (Zuul deployment)]
2025-07-12 16:19:47.501953 | orchestrator | ok
2025-07-12 16:19:47.514149 |
2025-07-12 16:19:47.514302 | TASK [Set cloud fact (local deployment)]
2025-07-12 16:19:47.539417 | orchestrator | skipping: Conditional result was False
2025-07-12 16:19:47.552370 |
2025-07-12 16:19:47.552539 | TASK [Clean the cloud environment]
2025-07-12 16:19:48.617343 | orchestrator | 2025-07-12 16:19:48 - clean up servers
2025-07-12 16:19:49.359631 | orchestrator | 2025-07-12 16:19:49 - testbed-manager
2025-07-12 16:19:49.537269 | orchestrator | 2025-07-12 16:19:49 - testbed-node-4
2025-07-12 16:19:49.627760 | orchestrator | 2025-07-12 16:19:49 - testbed-node-3
2025-07-12 16:19:49.716092 | orchestrator | 2025-07-12 16:19:49 - testbed-node-0
2025-07-12 16:19:49.810686 | orchestrator | 2025-07-12 16:19:49 - testbed-node-2
2025-07-12 16:19:49.904552 | orchestrator | 2025-07-12 16:19:49 - testbed-node-1
2025-07-12 16:19:49.994226 | orchestrator | 2025-07-12 16:19:49 - testbed-node-5
2025-07-12 16:19:50.086537 | orchestrator | 2025-07-12 16:19:50 - clean up keypairs
2025-07-12 16:19:50.108067 | orchestrator | 2025-07-12 16:19:50 - testbed
2025-07-12 16:19:50.135905 | orchestrator | 2025-07-12 16:19:50 - wait for servers to be gone
2025-07-12 16:20:01.114940 | orchestrator | 2025-07-12 16:20:01 - clean up ports
2025-07-12 16:20:01.340492 | orchestrator | 2025-07-12 16:20:01 - 14d43053-6697-40b9-a14f-0d9583703cc0
2025-07-12 16:20:01.757616 | orchestrator | 2025-07-12 16:20:01 - 202297b1-d773-43e7-ba54-2e00f0b4196b
2025-07-12 16:20:02.002500 | orchestrator | 2025-07-12 16:20:02 - 4ddaae55-cec6-4ca5-893c-0d4ee03b591c
2025-07-12 16:20:02.196002 | orchestrator | 2025-07-12 16:20:02 - 52146461-10b4-4dcc-811b-86f2b2e02eb7
2025-07-12 16:20:02.400962 | orchestrator | 2025-07-12 16:20:02 - 55ae7814-2d16-4b33-9b68-88e814fdff4c
2025-07-12 16:20:02.609306 | orchestrator | 2025-07-12 16:20:02 - aed24db6-4188-4468-ab69-c4f86c3fea3a
2025-07-12 16:20:02.813910 | orchestrator | 2025-07-12 16:20:02 - c6d7c1fd-cfce-4900-bfcc-ffbf505b8e2a
2025-07-12 16:20:03.017959 | orchestrator | 2025-07-12 16:20:03 - clean up volumes
2025-07-12 16:20:03.188677 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-0-node-base
2025-07-12 16:20:03.228490 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-2-node-base
2025-07-12 16:20:03.267341 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-5-node-base
2025-07-12 16:20:03.312847 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-1-node-base
2025-07-12 16:20:03.353194 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-4-node-base
2025-07-12 16:20:03.394767 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-3-node-base
2025-07-12 16:20:03.450866 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-manager-base
2025-07-12 16:20:03.493401 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-2-node-5
2025-07-12 16:20:03.541061 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-7-node-4
2025-07-12 16:20:03.585544 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-3-node-3
2025-07-12 16:20:03.636584 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-1-node-4
2025-07-12 16:20:03.679002 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-4-node-4
2025-07-12 16:20:03.721359 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-5-node-5
2025-07-12 16:20:03.764506 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-6-node-3
2025-07-12 16:20:03.807627 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-8-node-5
2025-07-12 16:20:03.859796 | orchestrator | 2025-07-12 16:20:03 - testbed-volume-0-node-3
2025-07-12 16:20:03.905049 | orchestrator | 2025-07-12 16:20:03 - disconnect routers
2025-07-12 16:20:04.020819 | orchestrator | 2025-07-12 16:20:04 - testbed
2025-07-12 16:20:05.051372 | orchestrator | 2025-07-12 16:20:05 - clean up subnets
2025-07-12 16:20:05.115136 | orchestrator | 2025-07-12 16:20:05 - subnet-testbed-management
2025-07-12 16:20:05.283350 | orchestrator | 2025-07-12 16:20:05 - clean up networks
2025-07-12 16:20:05.469406 | orchestrator | 2025-07-12 16:20:05 - net-testbed-management
2025-07-12 16:20:05.744354 | orchestrator | 2025-07-12 16:20:05 - clean up security groups
2025-07-12 16:20:05.790981 | orchestrator | 2025-07-12 16:20:05 - testbed-management
2025-07-12 16:20:05.913253 | orchestrator | 2025-07-12 16:20:05 - testbed-node
2025-07-12 16:20:06.013271 | orchestrator | 2025-07-12 16:20:06 - clean up floating ips
2025-07-12 16:20:06.051814 | orchestrator | 2025-07-12 16:20:06 - 81.163.192.204
2025-07-12 16:20:06.898268 | orchestrator | 2025-07-12 16:20:06 - clean up routers
2025-07-12 16:20:06.998288 | orchestrator | 2025-07-12 16:20:06 - testbed
2025-07-12 16:20:08.608085 | orchestrator | ok: Runtime: 0:00:20.421716
2025-07-12 16:20:08.612864 |
2025-07-12 16:20:08.613032 | PLAY RECAP
2025-07-12 16:20:08.613152 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-12 16:20:08.613213 |
2025-07-12 16:20:08.751976 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-12 16:20:08.753858 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-12 16:20:09.502386 |
2025-07-12 16:20:09.502551 | PLAY [Cleanup play]
2025-07-12 16:20:09.518733 |
2025-07-12 16:20:09.518902 | TASK [Set cloud fact (Zuul deployment)]
2025-07-12 16:20:09.578652 | orchestrator | ok
2025-07-12 16:20:09.592778 |
2025-07-12 16:20:09.592956 | TASK [Set cloud fact (local deployment)]
2025-07-12 16:20:09.627893 | orchestrator | skipping: Conditional result was False
2025-07-12 16:20:09.641865 |
2025-07-12 16:20:09.642012 | TASK [Clean the cloud environment]
2025-07-12 16:20:10.798169 | orchestrator | 2025-07-12 16:20:10 - clean up servers
2025-07-12 16:20:11.292219 | orchestrator | 2025-07-12 16:20:11 - clean up keypairs
2025-07-12 16:20:11.312173 | orchestrator | 2025-07-12 16:20:11 - wait for servers to be gone
2025-07-12 16:20:11.354597 | orchestrator | 2025-07-12 16:20:11 - clean up ports
2025-07-12 16:20:11.429486 | orchestrator | 2025-07-12 16:20:11 - clean up volumes
2025-07-12 16:20:11.491068 | orchestrator | 2025-07-12 16:20:11 - disconnect routers
2025-07-12 16:20:11.524671 | orchestrator | 2025-07-12 16:20:11 - clean up subnets
2025-07-12 16:20:11.546761 | orchestrator | 2025-07-12 16:20:11 - clean up networks
2025-07-12 16:20:11.712327 | orchestrator | 2025-07-12 16:20:11 - clean up security groups
2025-07-12 16:20:11.751036 | orchestrator | 2025-07-12 16:20:11 - clean up floating ips
2025-07-12 16:20:11.778799 | orchestrator | 2025-07-12 16:20:11 - clean up routers
2025-07-12 16:20:12.183638 | orchestrator | ok: Runtime: 0:00:01.360093
2025-07-12 16:20:12.187488 |
2025-07-12 16:20:12.187646 | PLAY RECAP
2025-07-12 16:20:12.187794 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-12 16:20:12.187859 |
2025-07-12 16:20:12.323762 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-12 16:20:12.324857 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-12 16:20:13.093780 |
2025-07-12 16:20:13.093959 | PLAY [Base post-fetch]
2025-07-12 16:20:13.110044 |
2025-07-12 16:20:13.110192 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-12 16:20:13.166735 | orchestrator | skipping: Conditional result was False
2025-07-12 16:20:13.182269 |
2025-07-12 16:20:13.182510 | TASK [fetch-output : Set log path for single node]
2025-07-12 16:20:13.228133 | orchestrator | ok
2025-07-12 16:20:13.238216 |
2025-07-12 16:20:13.238383 | LOOP [fetch-output : Ensure local output dirs]
2025-07-12 16:20:13.741364 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/136d96fce2ea4ab0a333aeec44b1cc40/work/logs"
2025-07-12 16:20:14.026960 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/136d96fce2ea4ab0a333aeec44b1cc40/work/artifacts"
2025-07-12 16:20:14.296187 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/136d96fce2ea4ab0a333aeec44b1cc40/work/docs"
2025-07-12 16:20:14.311499 |
2025-07-12 16:20:14.311637 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-12 16:20:15.196563 | orchestrator | changed: .d..t...... ./
2025-07-12 16:20:15.196937 | orchestrator | changed: All items complete
2025-07-12 16:20:15.197003 |
2025-07-12 16:20:15.911361 | orchestrator | changed: .d..t...... ./
2025-07-12 16:20:16.653442 | orchestrator | changed: .d..t...... ./
2025-07-12 16:20:16.676348 |
2025-07-12 16:20:16.676454 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-12 16:20:16.709813 | orchestrator | skipping: Conditional result was False
2025-07-12 16:20:16.714558 | orchestrator | skipping: Conditional result was False
2025-07-12 16:20:16.736330 |
2025-07-12 16:20:16.736427 | PLAY RECAP
2025-07-12 16:20:16.736500 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-07-12 16:20:16.736541 |
2025-07-12 16:20:16.823305 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-12 16:20:16.825849 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-12 16:20:17.491288 |
2025-07-12 16:20:17.492016 | PLAY [Base post]
2025-07-12 16:20:17.505430 |
2025-07-12 16:20:17.505543 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-12 16:20:18.753666 | orchestrator | changed
2025-07-12 16:20:18.762706 |
2025-07-12 16:20:18.762822 | PLAY RECAP
2025-07-12 16:20:18.762910 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-12 16:20:18.762983 |
2025-07-12 16:20:18.843091 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-12 16:20:18.845447 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-12 16:20:19.562168 |
2025-07-12 16:20:19.562300 | PLAY [Base post-logs]
2025-07-12 16:20:19.572141 |
2025-07-12 16:20:19.572260 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-12 16:20:20.036846 | localhost | changed
2025-07-12 16:20:20.054582 |
2025-07-12 16:20:20.054792 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-12 16:20:20.093557 | localhost | ok
2025-07-12 16:20:20.101042 |
2025-07-12 16:20:20.101234 | TASK [Set zuul-log-path fact]
2025-07-12 16:20:20.119613 | localhost | ok
2025-07-12 16:20:20.134088 |
2025-07-12 16:20:20.134235 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-12 16:20:20.172500 | localhost | ok
2025-07-12 16:20:20.179443 |
2025-07-12 16:20:20.179618 | TASK [upload-logs : Create log directories]
2025-07-12 16:20:20.661237 | localhost | changed
2025-07-12 16:20:20.667623 |
2025-07-12 16:20:20.667909 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-12 16:20:21.215435 | localhost -> localhost | ok: Runtime: 0:00:00.006940
2025-07-12 16:20:21.219548 |
2025-07-12 16:20:21.219711 | TASK [upload-logs : Upload logs to log server]
2025-07-12 16:20:21.797762 | localhost | Output suppressed because no_log was given
2025-07-12 16:20:21.802059 |
2025-07-12 16:20:21.802276 | LOOP [upload-logs : Compress console log and json output]
2025-07-12 16:20:21.861187 | localhost | skipping: Conditional result was False
2025-07-12 16:20:21.866131 | localhost | skipping: Conditional result was False
2025-07-12 16:20:21.880860 |
2025-07-12 16:20:21.881115 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-12 16:20:21.929705 | localhost | skipping: Conditional result was False
2025-07-12 16:20:21.930389 |
2025-07-12 16:20:21.933815 | localhost | skipping: Conditional result was False
2025-07-12 16:20:21.946390 |
2025-07-12 16:20:21.946589 | LOOP [upload-logs : Upload console log and json output]