2025-09-11 00:00:05.587283 | Job console starting
2025-09-11 00:00:05.606315 | Updating git repos
2025-09-11 00:00:05.666410 | Cloning repos into workspace
2025-09-11 00:00:05.837795 | Restoring repo states
2025-09-11 00:00:05.857989 | Merging changes
2025-09-11 00:00:05.858005 | Checking out repos
2025-09-11 00:00:06.241237 | Preparing playbooks
2025-09-11 00:00:06.772762 | Running Ansible setup
2025-09-11 00:00:11.357474 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-11 00:00:12.284963 |
2025-09-11 00:00:12.285095 | PLAY [Base pre]
2025-09-11 00:00:12.301635 |
2025-09-11 00:00:12.301751 | TASK [Setup log path fact]
2025-09-11 00:00:12.330621 | orchestrator | ok
2025-09-11 00:00:12.347201 |
2025-09-11 00:00:12.347325 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-11 00:00:12.385868 | orchestrator | ok
2025-09-11 00:00:12.405869 |
2025-09-11 00:00:12.406072 | TASK [emit-job-header : Print job information]
2025-09-11 00:00:12.467698 | # Job Information
2025-09-11 00:00:12.467865 | Ansible Version: 2.16.14
2025-09-11 00:00:12.467899 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-11 00:00:12.467932 | Pipeline: periodic-midnight
2025-09-11 00:00:12.467955 | Executor: 521e9411259a
2025-09-11 00:00:12.467975 | Triggered by: https://github.com/osism/testbed
2025-09-11 00:00:12.467997 | Event ID: a2a56783aff64bf280568a1efca327dc
2025-09-11 00:00:12.487906 |
2025-09-11 00:00:12.488033 | LOOP [emit-job-header : Print node information]
2025-09-11 00:00:12.661615 | orchestrator | ok:
2025-09-11 00:00:12.661778 | orchestrator | # Node Information
2025-09-11 00:00:12.661812 | orchestrator | Inventory Hostname: orchestrator
2025-09-11 00:00:12.661838 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-11 00:00:12.661859 | orchestrator | Username: zuul-testbed05
2025-09-11 00:00:12.661880 | orchestrator | Distro: Debian 12.12
2025-09-11 00:00:12.661903 | orchestrator | Provider: static-testbed
2025-09-11 00:00:12.661924 | orchestrator | Region:
2025-09-11 00:00:12.661945 | orchestrator | Label: testbed-orchestrator
2025-09-11 00:00:12.661964 | orchestrator | Product Name: OpenStack Nova
2025-09-11 00:00:12.661982 | orchestrator | Interface IP: 81.163.193.140
2025-09-11 00:00:12.674302 |
2025-09-11 00:00:12.674422 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-11 00:00:13.578545 | orchestrator -> localhost | changed
2025-09-11 00:00:13.588534 |
2025-09-11 00:00:13.588644 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-11 00:00:15.804114 | orchestrator -> localhost | changed
2025-09-11 00:00:15.826605 |
2025-09-11 00:00:15.826706 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-11 00:00:16.257848 | orchestrator -> localhost | ok
2025-09-11 00:00:16.265506 |
2025-09-11 00:00:16.265600 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-11 00:00:16.308380 | orchestrator | ok
2025-09-11 00:00:16.342239 | orchestrator | included: /var/lib/zuul/builds/111b3509201f44bf8eed852029dc6ac2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-11 00:00:16.374948 |
2025-09-11 00:00:16.375615 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-11 00:00:19.606793 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-11 00:00:19.607145 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/111b3509201f44bf8eed852029dc6ac2/work/111b3509201f44bf8eed852029dc6ac2_id_rsa
2025-09-11 00:00:19.607186 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/111b3509201f44bf8eed852029dc6ac2/work/111b3509201f44bf8eed852029dc6ac2_id_rsa.pub
2025-09-11 00:00:19.607212 | orchestrator -> localhost | The key fingerprint is:
2025-09-11 00:00:19.607237 | orchestrator -> localhost | SHA256:Cf/sbsa10PnRCWae6U942ucwR1iLq0Y3tt9F0CuR7RE zuul-build-sshkey
2025-09-11 00:00:19.607260 | orchestrator -> localhost | The key's randomart image is:
2025-09-11 00:00:19.607293 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-11 00:00:19.607316 | orchestrator -> localhost | | E |
2025-09-11 00:00:19.607337 | orchestrator -> localhost | | o..|
2025-09-11 00:00:19.607358 | orchestrator -> localhost | | . o.oo|
2025-09-11 00:00:19.607378 | orchestrator -> localhost | | o . ++++|
2025-09-11 00:00:19.607429 | orchestrator -> localhost | | S .++==+|
2025-09-11 00:00:19.607459 | orchestrator -> localhost | | o..+B=+.|
2025-09-11 00:00:19.607480 | orchestrator -> localhost | | .+o+==+o|
2025-09-11 00:00:19.607500 | orchestrator -> localhost | | .+.oo*++|
2025-09-11 00:00:19.607521 | orchestrator -> localhost | | ++. .o=+|
2025-09-11 00:00:19.607540 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-11 00:00:19.607594 | orchestrator -> localhost | ok: Runtime: 0:00:02.050309
2025-09-11 00:00:19.614659 |
2025-09-11 00:00:19.614756 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-11 00:00:19.654496 | orchestrator | ok
2025-09-11 00:00:19.662289 | orchestrator | included: /var/lib/zuul/builds/111b3509201f44bf8eed852029dc6ac2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-11 00:00:19.700461 |
2025-09-11 00:00:19.700559 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-11 00:00:19.725159 | orchestrator | skipping: Conditional result was False
2025-09-11 00:00:19.732050 |
2025-09-11 00:00:19.732135 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-11 00:00:20.781582 | orchestrator | changed
2025-09-11 00:00:20.786633 |
2025-09-11 00:00:20.786715 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-11 00:00:21.070338 | orchestrator | ok
2025-09-11 00:00:21.075302 |
2025-09-11 00:00:21.075378 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-11 00:00:21.515466 | orchestrator | ok
2025-09-11 00:00:21.532026 |
2025-09-11 00:00:21.532122 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-11 00:00:21.998854 | orchestrator | ok
2025-09-11 00:00:22.003956 |
2025-09-11 00:00:22.004043 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-11 00:00:22.051843 | orchestrator | skipping: Conditional result was False
2025-09-11 00:00:22.057895 |
2025-09-11 00:00:22.057985 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-11 00:00:22.917897 | orchestrator -> localhost | changed
2025-09-11 00:00:22.928504 |
2025-09-11 00:00:22.928592 | TASK [add-build-sshkey : Add back temp key]
2025-09-11 00:00:23.573540 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/111b3509201f44bf8eed852029dc6ac2/work/111b3509201f44bf8eed852029dc6ac2_id_rsa (zuul-build-sshkey)
2025-09-11 00:00:23.573723 | orchestrator -> localhost | ok: Runtime: 0:00:00.025317
2025-09-11 00:00:23.579314 |
2025-09-11 00:00:23.579410 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-11 00:00:24.156953 | orchestrator | ok
2025-09-11 00:00:24.168401 |
2025-09-11 00:00:24.168502 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-11 00:00:24.191000 | orchestrator | skipping: Conditional result was False
2025-09-11 00:00:24.239752 |
2025-09-11 00:00:24.239846 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-11 00:00:24.808358 | orchestrator | ok
2025-09-11 00:00:24.834595 |
2025-09-11 00:00:24.834699 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-11 00:00:24.887654 | orchestrator | ok
2025-09-11 00:00:24.893336 |
2025-09-11 00:00:24.893428 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-11 00:00:25.616586 | orchestrator -> localhost | ok
2025-09-11 00:00:25.622390 |
2025-09-11 00:00:25.622470 | TASK [validate-host : Collect information about the host]
2025-09-11 00:00:27.059594 | orchestrator | ok
2025-09-11 00:00:27.081270 |
2025-09-11 00:00:27.081389 | TASK [validate-host : Sanitize hostname]
2025-09-11 00:00:27.182596 | orchestrator | ok
2025-09-11 00:00:27.187241 |
2025-09-11 00:00:27.187331 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-11 00:00:28.245212 | orchestrator -> localhost | changed
2025-09-11 00:00:28.250257 |
2025-09-11 00:00:28.250343 | TASK [validate-host : Collect information about zuul worker]
2025-09-11 00:00:28.806634 | orchestrator | ok
2025-09-11 00:00:28.811319 |
2025-09-11 00:00:28.811429 | TASK [validate-host : Write out all zuul information for each host]
2025-09-11 00:00:29.759344 | orchestrator -> localhost | changed
2025-09-11 00:00:29.767951 |
2025-09-11 00:00:29.768030 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-11 00:00:30.048995 | orchestrator | ok
2025-09-11 00:00:30.053880 |
2025-09-11 00:00:30.053955 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-11 00:01:09.696075 | orchestrator | changed:
2025-09-11 00:01:09.696286 | orchestrator | .d..t...... src/
2025-09-11 00:01:09.696321 | orchestrator | .d..t...... src/github.com/
2025-09-11 00:01:09.696366 | orchestrator | .d..t...... src/github.com/osism/
2025-09-11 00:01:09.696389 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-11 00:01:09.696410 | orchestrator | RedHat.yml
2025-09-11 00:01:09.728214 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-11 00:01:09.728231 | orchestrator | RedHat.yml
2025-09-11 00:01:09.728290 | orchestrator | = 1.53.0"...
2025-09-11 00:01:21.945033 | orchestrator | 00:01:21.944 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-11 00:01:22.144622 | orchestrator | 00:01:22.144 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-11 00:01:22.549838 | orchestrator | 00:01:22.549 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-11 00:01:22.972562 | orchestrator | 00:01:22.972 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-11 00:01:23.797500 | orchestrator | 00:01:23.797 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-11 00:01:23.873720 | orchestrator | 00:01:23.873 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-11 00:01:24.544493 | orchestrator | 00:01:24.544 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-11 00:01:24.544549 | orchestrator | 00:01:24.544 STDOUT terraform: Providers are signed by their developers.
2025-09-11 00:01:24.544592 | orchestrator | 00:01:24.544 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-11 00:01:24.544622 | orchestrator | 00:01:24.544 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-11 00:01:24.544738 | orchestrator | 00:01:24.544 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-11 00:01:24.544790 | orchestrator | 00:01:24.544 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-11 00:01:24.544835 | orchestrator | 00:01:24.544 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-11 00:01:24.544842 | orchestrator | 00:01:24.544 STDOUT terraform: you run "tofu init" in the future.
2025-09-11 00:01:24.545298 | orchestrator | 00:01:24.545 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-11 00:01:24.545392 | orchestrator | 00:01:24.545 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-11 00:01:24.545458 | orchestrator | 00:01:24.545 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-11 00:01:24.545465 | orchestrator | 00:01:24.545 STDOUT terraform: should now work.
2025-09-11 00:01:24.545530 | orchestrator | 00:01:24.545 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-11 00:01:24.545573 | orchestrator | 00:01:24.545 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-11 00:01:24.545618 | orchestrator | 00:01:24.545 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-11 00:01:24.662035 | orchestrator | 00:01:24.661 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-09-11 00:01:24.662125 | orchestrator | 00:01:24.662 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-11 00:01:24.857538 | orchestrator | 00:01:24.857 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-11 00:01:24.857604 | orchestrator | 00:01:24.857 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-11 00:01:24.857614 | orchestrator | 00:01:24.857 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-11 00:01:24.857619 | orchestrator | 00:01:24.857 STDOUT terraform: for this configuration.
2025-09-11 00:01:24.989298 | orchestrator | 00:01:24.989 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-09-11 00:01:24.989373 | orchestrator | 00:01:24.989 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-11 00:01:25.091881 | orchestrator | 00:01:25.091 STDOUT terraform: ci.auto.tfvars
2025-09-11 00:01:25.094886 | orchestrator | 00:01:25.094 STDOUT terraform: default_custom.tf
2025-09-11 00:01:25.213558 | orchestrator | 00:01:25.213 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
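For reference, the provider installations above correspond to a `required_providers` block along these lines. This is a sketch reconstructed from the log, not the job's actual configuration: only the `>= 2.2.0` constraint for hashicorp/local (and a truncated `= 1.53.0"` fragment) is visible above, so the other constraints are assumptions.

```hcl
# Sketch only: inferred from the versions OpenTofu installed in the log above.
terraform {
  required_providers {
    null = {
      source  = "hashicorp/null"
      version = ">= 3.2.0" # v3.2.4 was installed; constraint assumed
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # v3.3.2 was installed; constraint partially visible in the log
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # v2.5.3 was installed; constraint visible in the log
    }
  }
}
```

The `.terraform.lock.hcl` file mentioned in the output then pins these exact versions for future `tofu init` runs.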
2025-09-11 00:01:26.187530 | orchestrator | 00:01:26.187 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-11 00:01:26.707571 | orchestrator | 00:01:26.707 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-11 00:01:26.988625 | orchestrator | 00:01:26.988 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-11 00:01:26.988699 | orchestrator | 00:01:26.988 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-11 00:01:26.988706 | orchestrator | 00:01:26.988 STDOUT terraform:   + create
2025-09-11 00:01:26.988723 | orchestrator | 00:01:26.988 STDOUT terraform:  <= read (data resources)
2025-09-11 00:01:26.988807 | orchestrator | 00:01:26.988 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-11 00:01:26.988925 | orchestrator | 00:01:26.988 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-11 00:01:26.988983 | orchestrator | 00:01:26.988 STDOUT terraform:   # (config refers to values not yet known)
2025-09-11 00:01:26.989058 | orchestrator | 00:01:26.988 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-11 00:01:26.989118 | orchestrator | 00:01:26.989 STDOUT terraform:       + checksum    = (known after apply)
2025-09-11 00:01:26.989217 | orchestrator | 00:01:26.989 STDOUT terraform:       + created_at  = (known after apply)
2025-09-11 00:01:26.989277 | orchestrator | 00:01:26.989 STDOUT terraform:       + file        = (known after apply)
2025-09-11 00:01:26.989335 | orchestrator | 00:01:26.989 STDOUT terraform:       + id          = (known after apply)
2025-09-11 00:01:26.989437 | orchestrator | 00:01:26.989 STDOUT terraform:       + metadata    = (known after apply)
2025-09-11 00:01:26.989516 | orchestrator | 00:01:26.989 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-11 00:01:26.989540 | orchestrator | 00:01:26.989 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-09-11 00:01:26.989574 | orchestrator | 00:01:26.989 STDOUT terraform:       + most_recent = true
2025-09-11 00:01:26.989638 | orchestrator | 00:01:26.989 STDOUT terraform:       + name        = (known after apply)
2025-09-11 00:01:26.989689 | orchestrator | 00:01:26.989 STDOUT terraform:       + protected   = (known after apply)
2025-09-11 00:01:26.989738 | orchestrator | 00:01:26.989 STDOUT terraform:       + region      = (known after apply)
2025-09-11 00:01:26.989788 | orchestrator | 00:01:26.989 STDOUT terraform:       + schema      = (known after apply)
2025-09-11 00:01:26.989836 | orchestrator | 00:01:26.989 STDOUT terraform:       + size_bytes  = (known after apply)
2025-09-11 00:01:26.989883 | orchestrator | 00:01:26.989 STDOUT terraform:       + tags        = (known after apply)
2025-09-11 00:01:26.989929 | orchestrator | 00:01:26.989 STDOUT terraform:       + updated_at  = (known after apply)
2025-09-11 00:01:26.989953 | orchestrator | 00:01:26.989 STDOUT terraform:     }
2025-09-11 00:01:26.990060 | orchestrator | 00:01:26.989 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-11 00:01:26.990106 | orchestrator | 00:01:26.990 STDOUT terraform:   # (config refers to values not yet known)
2025-09-11 00:01:26.990167 | orchestrator | 00:01:26.990 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-11 00:01:26.990215 | orchestrator | 00:01:26.990 STDOUT terraform:       + checksum    = (known after apply)
2025-09-11 00:01:26.990261 | orchestrator | 00:01:26.990 STDOUT terraform:       + created_at  = (known after apply)
2025-09-11 00:01:26.990312 | orchestrator | 00:01:26.990 STDOUT terraform:       + file        = (known after apply)
2025-09-11 00:01:26.990361 | orchestrator | 00:01:26.990 STDOUT terraform:       + id          = (known after apply)
2025-09-11 00:01:26.990424 | orchestrator | 00:01:26.990 STDOUT terraform:       + metadata    = (known after apply)
2025-09-11 00:01:26.990474 | orchestrator | 00:01:26.990 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-11 00:01:26.990521 | orchestrator | 00:01:26.990 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-09-11 00:01:26.990559 | orchestrator | 00:01:26.990 STDOUT terraform:       + most_recent = true
2025-09-11 00:01:26.990601 | orchestrator | 00:01:26.990 STDOUT terraform:       + name        = (known after apply)
2025-09-11 00:01:26.990648 | orchestrator | 00:01:26.990 STDOUT terraform:       + protected   = (known after apply)
2025-09-11 00:01:26.990695 | orchestrator | 00:01:26.990 STDOUT terraform:       + region      = (known after apply)
2025-09-11 00:01:26.990742 | orchestrator | 00:01:26.990 STDOUT terraform:       + schema      = (known after apply)
2025-09-11 00:01:26.990787 | orchestrator | 00:01:26.990 STDOUT terraform:       + size_bytes  = (known after apply)
2025-09-11 00:01:26.990836 | orchestrator | 00:01:26.990 STDOUT terraform:       + tags        = (known after apply)
2025-09-11 00:01:26.990881 | orchestrator | 00:01:26.990 STDOUT terraform:       + updated_at  = (known after apply)
2025-09-11 00:01:26.990905 | orchestrator | 00:01:26.990 STDOUT terraform:     }
2025-09-11 00:01:26.991000 | orchestrator | 00:01:26.990 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-11 00:01:26.991047 | orchestrator | 00:01:26.990 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-11 00:01:26.991110 | orchestrator | 00:01:26.991 STDOUT terraform:       + content              = (known after apply)
2025-09-11 00:01:26.991168 | orchestrator | 00:01:26.991 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-11 00:01:26.991227 | orchestrator | 00:01:26.991 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-11 00:01:26.991286 | orchestrator | 00:01:26.991 STDOUT terraform:       + content_md5          = (known after apply)
2025-09-11 00:01:26.991344 | orchestrator | 00:01:26.991 STDOUT terraform:       + content_sha1         = (known after apply)
2025-09-11 00:01:26.991416 | orchestrator | 00:01:26.991 STDOUT terraform:       + content_sha256       = (known after apply)
2025-09-11 00:01:26.991475 | orchestrator | 00:01:26.991 STDOUT terraform:       + content_sha512       = (known after apply)
2025-09-11 00:01:26.991514 | orchestrator | 00:01:26.991 STDOUT terraform:       + directory_permission = "0777"
2025-09-11 00:01:26.991552 | orchestrator | 00:01:26.991 STDOUT terraform:       + file_permission      = "0644"
2025-09-11 00:01:26.991611 | orchestrator | 00:01:26.991 STDOUT terraform:       + filename             = ".MANAGER_ADDRESS.ci"
2025-09-11 00:01:26.991671 | orchestrator | 00:01:26.991 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.991695 | orchestrator | 00:01:26.991 STDOUT terraform:     }
2025-09-11 00:01:26.991741 | orchestrator | 00:01:26.991 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-11 00:01:26.991782 | orchestrator | 00:01:26.991 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-11 00:01:26.991844 | orchestrator | 00:01:26.991 STDOUT terraform:       + content              = (known after apply)
2025-09-11 00:01:26.991902 | orchestrator | 00:01:26.991 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-11 00:01:26.991960 | orchestrator | 00:01:26.991 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-11 00:01:26.992018 | orchestrator | 00:01:26.991 STDOUT terraform:       + content_md5          = (known after apply)
2025-09-11 00:01:26.992078 | orchestrator | 00:01:26.992 STDOUT terraform:       + content_sha1         = (known after apply)
2025-09-11 00:01:26.992135 | orchestrator | 00:01:26.992 STDOUT terraform:       + content_sha256       = (known after apply)
2025-09-11 00:01:26.992193 | orchestrator | 00:01:26.992 STDOUT terraform:       + content_sha512       = (known after apply)
2025-09-11 00:01:26.992231 | orchestrator | 00:01:26.992 STDOUT terraform:       + directory_permission = "0777"
2025-09-11 00:01:26.992271 | orchestrator | 00:01:26.992 STDOUT terraform:       + file_permission      = "0644"
2025-09-11 00:01:26.992322 | orchestrator | 00:01:26.992 STDOUT terraform:       + filename             = ".id_rsa.ci.pub"
2025-09-11 00:01:26.992398 | orchestrator | 00:01:26.992 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.992415 | orchestrator | 00:01:26.992 STDOUT terraform:     }
2025-09-11 00:01:26.992461 | orchestrator | 00:01:26.992 STDOUT terraform:   # local_file.inventory will be created
2025-09-11 00:01:26.992496 | orchestrator | 00:01:26.992 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-11 00:01:26.992556 | orchestrator | 00:01:26.992 STDOUT terraform:       + content              = (known after apply)
2025-09-11 00:01:26.992613 | orchestrator | 00:01:26.992 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-11 00:01:26.992671 | orchestrator | 00:01:26.992 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-11 00:01:26.992729 | orchestrator | 00:01:26.992 STDOUT terraform:       + content_md5          = (known after apply)
2025-09-11 00:01:26.992788 | orchestrator | 00:01:26.992 STDOUT terraform:       + content_sha1         = (known after apply)
2025-09-11 00:01:26.992846 | orchestrator | 00:01:26.992 STDOUT terraform:       + content_sha256       = (known after apply)
2025-09-11 00:01:26.992903 | orchestrator | 00:01:26.992 STDOUT terraform:       + content_sha512       = (known after apply)
2025-09-11 00:01:26.992956 | orchestrator | 00:01:26.992 STDOUT terraform:       + directory_permission = "0777"
2025-09-11 00:01:26.993018 | orchestrator | 00:01:26.992 STDOUT terraform:       + file_permission      = "0644"
2025-09-11 00:01:26.993083 | orchestrator | 00:01:26.993 STDOUT terraform:       + filename             = "inventory.ci"
2025-09-11 00:01:26.993192 | orchestrator | 00:01:26.993 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.993219 | orchestrator | 00:01:26.993 STDOUT terraform:     }
2025-09-11 00:01:26.993268 | orchestrator | 00:01:26.993 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-11 00:01:26.993319 | orchestrator | 00:01:26.993 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-11 00:01:26.993375 | orchestrator | 00:01:26.993 STDOUT terraform:       + content              = (sensitive value)
2025-09-11 00:01:26.993449 | orchestrator | 00:01:26.993 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-11 00:01:26.993509 | orchestrator | 00:01:26.993 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-11 00:01:26.993569 | orchestrator | 00:01:26.993 STDOUT terraform:       + content_md5          = (known after apply)
2025-09-11 00:01:26.993628 | orchestrator | 00:01:26.993 STDOUT terraform:       + content_sha1         = (known after apply)
2025-09-11 00:01:26.993687 | orchestrator | 00:01:26.993 STDOUT terraform:       + content_sha256       = (known after apply)
2025-09-11 00:01:26.993744 | orchestrator | 00:01:26.993 STDOUT terraform:       + content_sha512       = (known after apply)
2025-09-11 00:01:26.993797 | orchestrator | 00:01:26.993 STDOUT terraform:       + directory_permission = "0700"
2025-09-11 00:01:26.993841 | orchestrator | 00:01:26.993 STDOUT terraform:       + file_permission      = "0600"
2025-09-11 00:01:26.993890 | orchestrator | 00:01:26.993 STDOUT terraform:       + filename             = ".id_rsa.ci"
2025-09-11 00:01:26.993949 | orchestrator | 00:01:26.993 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.993970 | orchestrator | 00:01:26.993 STDOUT terraform:     }
2025-09-11 00:01:26.994033 | orchestrator | 00:01:26.993 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-11 00:01:26.994120 | orchestrator | 00:01:26.994 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-11 00:01:26.994199 | orchestrator | 00:01:26.994 STDOUT terraform:       + id = (known after apply)
2025-09-11 00:01:26.994241 | orchestrator | 00:01:26.994 STDOUT terraform:     }
2025-09-11 00:01:26.994347 | orchestrator | 00:01:26.994 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-11 00:01:26.994469 | orchestrator | 00:01:26.994 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-11 00:01:26.994527 | orchestrator | 00:01:26.994 STDOUT terraform:       + attachment           = (known after apply)
2025-09-11 00:01:26.994570 | orchestrator | 00:01:26.994 STDOUT terraform:       + availability_zone    = "nova"
2025-09-11 00:01:26.994630 | orchestrator | 00:01:26.994 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.994693 | orchestrator | 00:01:26.994 STDOUT terraform:       + image_id             = (known after apply)
2025-09-11 00:01:26.994752 | orchestrator | 00:01:26.994 STDOUT terraform:       + metadata             = (known after apply)
2025-09-11 00:01:26.994827 | orchestrator | 00:01:26.994 STDOUT terraform:       + name                 = "testbed-volume-manager-base"
2025-09-11 00:01:26.994885 | orchestrator | 00:01:26.994 STDOUT terraform:       + region               = (known after apply)
2025-09-11 00:01:26.994920 | orchestrator | 00:01:26.994 STDOUT terraform:       + size                 = 80
2025-09-11 00:01:26.994961 | orchestrator | 00:01:26.994 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-11 00:01:26.994997 | orchestrator | 00:01:26.994 STDOUT terraform:       + volume_type          = "ssd"
2025-09-11 00:01:26.995020 | orchestrator | 00:01:26.994 STDOUT terraform:     }
2025-09-11 00:01:26.995094 | orchestrator | 00:01:26.995 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-11 00:01:26.995176 | orchestrator | 00:01:26.995 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-11 00:01:26.995220 | orchestrator | 00:01:26.995 STDOUT terraform:       + attachment           = (known after apply)
2025-09-11 00:01:26.995257 | orchestrator | 00:01:26.995 STDOUT terraform:       + availability_zone    = "nova"
2025-09-11 00:01:26.995315 | orchestrator | 00:01:26.995 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.995369 | orchestrator | 00:01:26.995 STDOUT terraform:       + image_id             = (known after apply)
2025-09-11 00:01:26.995434 | orchestrator | 00:01:26.995 STDOUT terraform:       + metadata             = (known after apply)
2025-09-11 00:01:26.995505 | orchestrator | 00:01:26.995 STDOUT terraform:       + name                 = "testbed-volume-0-node-base"
2025-09-11 00:01:26.995560 | orchestrator | 00:01:26.995 STDOUT terraform:       + region               = (known after apply)
2025-09-11 00:01:26.995592 | orchestrator | 00:01:26.995 STDOUT terraform:       + size                 = 80
2025-09-11 00:01:26.995629 | orchestrator | 00:01:26.995 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-11 00:01:26.995666 | orchestrator | 00:01:26.995 STDOUT terraform:       + volume_type          = "ssd"
2025-09-11 00:01:26.995686 | orchestrator | 00:01:26.995 STDOUT terraform:     }
2025-09-11 00:01:26.995759 | orchestrator | 00:01:26.995 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-11 00:01:26.995830 | orchestrator | 00:01:26.995 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-11 00:01:26.995886 | orchestrator | 00:01:26.995 STDOUT terraform:       + attachment           = (known after apply)
2025-09-11 00:01:26.995924 | orchestrator | 00:01:26.995 STDOUT terraform:       + availability_zone    = "nova"
2025-09-11 00:01:26.995979 | orchestrator | 00:01:26.995 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.996034 | orchestrator | 00:01:26.995 STDOUT terraform:       + image_id             = (known after apply)
2025-09-11 00:01:26.996088 | orchestrator | 00:01:26.996 STDOUT terraform:       + metadata             = (known after apply)
2025-09-11 00:01:26.996157 | orchestrator | 00:01:26.996 STDOUT terraform:       + name                 = "testbed-volume-1-node-base"
2025-09-11 00:01:26.996211 | orchestrator | 00:01:26.996 STDOUT terraform:       + region               = (known after apply)
2025-09-11 00:01:26.996243 | orchestrator | 00:01:26.996 STDOUT terraform:       + size                 = 80
2025-09-11 00:01:26.996280 | orchestrator | 00:01:26.996 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-11 00:01:26.996316 | orchestrator | 00:01:26.996 STDOUT terraform:       + volume_type          = "ssd"
2025-09-11 00:01:26.996337 | orchestrator | 00:01:26.996 STDOUT terraform:     }
2025-09-11 00:01:26.996421 | orchestrator | 00:01:26.996 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-11 00:01:26.996492 | orchestrator | 00:01:26.996 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-11 00:01:26.996546 | orchestrator | 00:01:26.996 STDOUT terraform:       + attachment           = (known after apply)
2025-09-11 00:01:26.996590 | orchestrator | 00:01:26.996 STDOUT terraform:       + availability_zone    = "nova"
2025-09-11 00:01:26.996648 | orchestrator | 00:01:26.996 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.996702 | orchestrator | 00:01:26.996 STDOUT terraform:       + image_id             = (known after apply)
2025-09-11 00:01:26.996758 | orchestrator | 00:01:26.996 STDOUT terraform:       + metadata             = (known after apply)
2025-09-11 00:01:26.996829 | orchestrator | 00:01:26.996 STDOUT terraform:       + name                 = "testbed-volume-2-node-base"
2025-09-11 00:01:26.996885 | orchestrator | 00:01:26.996 STDOUT terraform:       + region               = (known after apply)
2025-09-11 00:01:26.996920 | orchestrator | 00:01:26.996 STDOUT terraform:       + size                 = 80
2025-09-11 00:01:26.996956 | orchestrator | 00:01:26.996 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-11 00:01:26.996993 | orchestrator | 00:01:26.996 STDOUT terraform:       + volume_type          = "ssd"
2025-09-11 00:01:26.997013 | orchestrator | 00:01:26.996 STDOUT terraform:     }
2025-09-11 00:01:26.997086 | orchestrator | 00:01:26.997 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-11 00:01:26.997155 | orchestrator | 00:01:26.997 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-11 00:01:26.997212 | orchestrator | 00:01:26.997 STDOUT terraform:       + attachment           = (known after apply)
2025-09-11 00:01:26.997249 | orchestrator | 00:01:26.997 STDOUT terraform:       + availability_zone    = "nova"
2025-09-11 00:01:26.997305 | orchestrator | 00:01:26.997 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.997360 | orchestrator | 00:01:26.997 STDOUT terraform:       + image_id             = (known after apply)
2025-09-11 00:01:26.997460 | orchestrator | 00:01:26.997 STDOUT terraform:       + metadata             = (known after apply)
2025-09-11 00:01:26.997543 | orchestrator | 00:01:26.997 STDOUT terraform:       + name                 = "testbed-volume-3-node-base"
2025-09-11 00:01:26.997602 | orchestrator | 00:01:26.997 STDOUT terraform:       + region               = (known after apply)
2025-09-11 00:01:26.997633 | orchestrator | 00:01:26.997 STDOUT terraform:       + size                 = 80
2025-09-11 00:01:26.997670 | orchestrator | 00:01:26.997 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-11 00:01:26.997707 | orchestrator | 00:01:26.997 STDOUT terraform:       + volume_type          = "ssd"
2025-09-11 00:01:26.997729 | orchestrator | 00:01:26.997 STDOUT terraform:     }
2025-09-11 00:01:26.997801 | orchestrator | 00:01:26.997 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-11 00:01:26.997879 | orchestrator | 00:01:26.997 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-11 00:01:26.997926 | orchestrator | 00:01:26.997 STDOUT terraform:       + attachment           = (known after apply)
2025-09-11 00:01:26.997966 | orchestrator | 00:01:26.997 STDOUT terraform:       + availability_zone    = "nova"
2025-09-11 00:01:26.998044 | orchestrator | 00:01:26.997 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.998098 | orchestrator | 00:01:26.998 STDOUT terraform:       + image_id             = (known after apply)
2025-09-11 00:01:26.998154 | orchestrator | 00:01:26.998 STDOUT terraform:       + metadata             = (known after apply)
2025-09-11 00:01:26.998225 | orchestrator | 00:01:26.998 STDOUT terraform:       + name                 = "testbed-volume-4-node-base"
2025-09-11 00:01:26.998281 | orchestrator | 00:01:26.998 STDOUT terraform:       + region               = (known after apply)
2025-09-11 00:01:26.998313 | orchestrator | 00:01:26.998 STDOUT terraform:       + size                 = 80
2025-09-11 00:01:26.998351 | orchestrator | 00:01:26.998 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-11 00:01:26.998420 | orchestrator | 00:01:26.998 STDOUT terraform:       + volume_type          = "ssd"
2025-09-11 00:01:26.998441 | orchestrator | 00:01:26.998 STDOUT terraform:     }
2025-09-11 00:01:26.998513 | orchestrator | 00:01:26.998 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-11 00:01:26.998590 | orchestrator | 00:01:26.998 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-11 00:01:26.998646 | orchestrator | 00:01:26.998 STDOUT terraform:       + attachment           = (known after apply)
2025-09-11 00:01:26.998681 | orchestrator | 00:01:26.998 STDOUT terraform:       + availability_zone    = "nova"
2025-09-11 00:01:26.998737 | orchestrator | 00:01:26.998 STDOUT terraform:       + id                   = (known after apply)
2025-09-11 00:01:26.998792 | orchestrator | 00:01:26.998 STDOUT terraform:       + image_id             = (known after apply)
2025-09-11 00:01:26.998848 | orchestrator | 00:01:26.998 STDOUT terraform:       + metadata             = (known after apply)
2025-09-11 00:01:26.998917 | orchestrator | 00:01:26.998 STDOUT terraform:       + name                 = "testbed-volume-5-node-base"
2025-09-11 00:01:26.998971 | orchestrator | 00:01:26.998 STDOUT terraform:       + region               = (known after apply)
2025-09-11 00:01:26.998998 | orchestrator | 00:01:26.998 STDOUT terraform:       + size                 = 80
2025-09-11 00:01:26.999031 | orchestrator | 00:01:26.998 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-11 00:01:26.999065 | orchestrator | 00:01:26.999 STDOUT terraform:       + volume_type          = "ssd"
2025-09-11 00:01:26.999083 | orchestrator | 00:01:26.999 STDOUT terraform:     }
2025-09-11 00:01:26.999142 | orchestrator | 00:01:26.999 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-11 00:01:26.999202 | orchestrator | 00:01:26.999 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-11 00:01:26.999251 | orchestrator | 00:01:26.999 STDOUT
terraform:  + attachment = (known after apply) 2025-09-11 00:01:26.999281 | orchestrator | 00:01:26.999 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:26.999331 | orchestrator | 00:01:26.999 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:26.999395 | orchestrator | 00:01:26.999 STDOUT terraform:  + metadata = (known after apply) 2025-09-11 00:01:26.999444 | orchestrator | 00:01:26.999 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-11 00:01:26.999493 | orchestrator | 00:01:26.999 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:26.999521 | orchestrator | 00:01:26.999 STDOUT terraform:  + size = 20 2025-09-11 00:01:26.999556 | orchestrator | 00:01:26.999 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-11 00:01:26.999587 | orchestrator | 00:01:26.999 STDOUT terraform:  + volume_type = "ssd" 2025-09-11 00:01:26.999604 | orchestrator | 00:01:26.999 STDOUT terraform:  } 2025-09-11 00:01:26.999667 | orchestrator | 00:01:26.999 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-11 00:01:26.999726 | orchestrator | 00:01:26.999 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-11 00:01:26.999774 | orchestrator | 00:01:26.999 STDOUT terraform:  + attachment = (known after apply) 2025-09-11 00:01:26.999806 | orchestrator | 00:01:26.999 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:26.999855 | orchestrator | 00:01:26.999 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:26.999902 | orchestrator | 00:01:26.999 STDOUT terraform:  + metadata = (known after apply) 2025-09-11 00:01:26.999954 | orchestrator | 00:01:26.999 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-11 00:01:27.000033 | orchestrator | 00:01:26.999 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.000064 | orchestrator | 00:01:27.000 STDOUT terraform:  + size = 20 2025-09-11 00:01:27.000097 | 
orchestrator | 00:01:27.000 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-11 00:01:27.000131 | orchestrator | 00:01:27.000 STDOUT terraform:  + volume_type = "ssd" 2025-09-11 00:01:27.000150 | orchestrator | 00:01:27.000 STDOUT terraform:  } 2025-09-11 00:01:27.000210 | orchestrator | 00:01:27.000 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-11 00:01:27.000270 | orchestrator | 00:01:27.000 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-11 00:01:27.000317 | orchestrator | 00:01:27.000 STDOUT terraform:  + attachment = (known after apply) 2025-09-11 00:01:27.000350 | orchestrator | 00:01:27.000 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.000412 | orchestrator | 00:01:27.000 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.000457 | orchestrator | 00:01:27.000 STDOUT terraform:  + metadata = (known after apply) 2025-09-11 00:01:27.000511 | orchestrator | 00:01:27.000 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-11 00:01:27.000562 | orchestrator | 00:01:27.000 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.000591 | orchestrator | 00:01:27.000 STDOUT terraform:  + size = 20 2025-09-11 00:01:27.000631 | orchestrator | 00:01:27.000 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-11 00:01:27.000659 | orchestrator | 00:01:27.000 STDOUT terraform:  + volume_type = "ssd" 2025-09-11 00:01:27.000678 | orchestrator | 00:01:27.000 STDOUT terraform:  } 2025-09-11 00:01:27.000737 | orchestrator | 00:01:27.000 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-11 00:01:27.000796 | orchestrator | 00:01:27.000 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-11 00:01:27.000844 | orchestrator | 00:01:27.000 STDOUT terraform:  + attachment = (known after apply) 2025-09-11 00:01:27.000877 | orchestrator | 
00:01:27.000 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.000924 | orchestrator | 00:01:27.000 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.000972 | orchestrator | 00:01:27.000 STDOUT terraform:  + metadata = (known after apply) 2025-09-11 00:01:27.001024 | orchestrator | 00:01:27.000 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-11 00:01:27.001072 | orchestrator | 00:01:27.001 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.001101 | orchestrator | 00:01:27.001 STDOUT terraform:  + size = 20 2025-09-11 00:01:27.001133 | orchestrator | 00:01:27.001 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-11 00:01:27.001165 | orchestrator | 00:01:27.001 STDOUT terraform:  + volume_type = "ssd" 2025-09-11 00:01:27.001183 | orchestrator | 00:01:27.001 STDOUT terraform:  } 2025-09-11 00:01:27.001244 | orchestrator | 00:01:27.001 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-11 00:01:27.001302 | orchestrator | 00:01:27.001 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-11 00:01:27.001351 | orchestrator | 00:01:27.001 STDOUT terraform:  + attachment = (known after apply) 2025-09-11 00:01:27.001396 | orchestrator | 00:01:27.001 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.001439 | orchestrator | 00:01:27.001 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.001486 | orchestrator | 00:01:27.001 STDOUT terraform:  + metadata = (known after apply) 2025-09-11 00:01:27.001538 | orchestrator | 00:01:27.001 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-11 00:01:27.001586 | orchestrator | 00:01:27.001 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.001613 | orchestrator | 00:01:27.001 STDOUT terraform:  + size = 20 2025-09-11 00:01:27.001646 | orchestrator | 00:01:27.001 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-11 
00:01:27.001678 | orchestrator | 00:01:27.001 STDOUT terraform:  + volume_type = "ssd" 2025-09-11 00:01:27.001697 | orchestrator | 00:01:27.001 STDOUT terraform:  } 2025-09-11 00:01:27.001760 | orchestrator | 00:01:27.001 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-11 00:01:27.001819 | orchestrator | 00:01:27.001 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-11 00:01:27.001867 | orchestrator | 00:01:27.001 STDOUT terraform:  + attachment = (known after apply) 2025-09-11 00:01:27.001901 | orchestrator | 00:01:27.001 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.001957 | orchestrator | 00:01:27.001 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.002005 | orchestrator | 00:01:27.001 STDOUT terraform:  + metadata = (known after apply) 2025-09-11 00:01:27.002115 | orchestrator | 00:01:27.002 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-11 00:01:27.002177 | orchestrator | 00:01:27.002 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.002209 | orchestrator | 00:01:27.002 STDOUT terraform:  + size = 20 2025-09-11 00:01:27.002237 | orchestrator | 00:01:27.002 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-11 00:01:27.002270 | orchestrator | 00:01:27.002 STDOUT terraform:  + volume_type = "ssd" 2025-09-11 00:01:27.002287 | orchestrator | 00:01:27.002 STDOUT terraform:  } 2025-09-11 00:01:27.002347 | orchestrator | 00:01:27.002 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-11 00:01:27.002434 | orchestrator | 00:01:27.002 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-11 00:01:27.002482 | orchestrator | 00:01:27.002 STDOUT terraform:  + attachment = (known after apply) 2025-09-11 00:01:27.002515 | orchestrator | 00:01:27.002 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.002564 | 
orchestrator | 00:01:27.002 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.002614 | orchestrator | 00:01:27.002 STDOUT terraform:  + metadata = (known after apply) 2025-09-11 00:01:27.002668 | orchestrator | 00:01:27.002 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-11 00:01:27.002716 | orchestrator | 00:01:27.002 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.002743 | orchestrator | 00:01:27.002 STDOUT terraform:  + size = 20 2025-09-11 00:01:27.002772 | orchestrator | 00:01:27.002 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-11 00:01:27.002804 | orchestrator | 00:01:27.002 STDOUT terraform:  + volume_type = "ssd" 2025-09-11 00:01:27.002813 | orchestrator | 00:01:27.002 STDOUT terraform:  } 2025-09-11 00:01:27.002873 | orchestrator | 00:01:27.002 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-11 00:01:27.002923 | orchestrator | 00:01:27.002 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-11 00:01:27.002966 | orchestrator | 00:01:27.002 STDOUT terraform:  + attachment = (known after apply) 2025-09-11 00:01:27.002994 | orchestrator | 00:01:27.002 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.003037 | orchestrator | 00:01:27.002 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.003082 | orchestrator | 00:01:27.003 STDOUT terraform:  + metadata = (known after apply) 2025-09-11 00:01:27.003129 | orchestrator | 00:01:27.003 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-11 00:01:27.003173 | orchestrator | 00:01:27.003 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.003198 | orchestrator | 00:01:27.003 STDOUT terraform:  + size = 20 2025-09-11 00:01:27.003228 | orchestrator | 00:01:27.003 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-11 00:01:27.003257 | orchestrator | 00:01:27.003 STDOUT terraform:  + volume_type = "ssd" 
2025-09-11 00:01:27.003279 | orchestrator | 00:01:27.003 STDOUT terraform:  } 2025-09-11 00:01:27.003368 | orchestrator | 00:01:27.003 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-11 00:01:27.003441 | orchestrator | 00:01:27.003 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-11 00:01:27.003485 | orchestrator | 00:01:27.003 STDOUT terraform:  + attachment = (known after apply) 2025-09-11 00:01:27.003515 | orchestrator | 00:01:27.003 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.003559 | orchestrator | 00:01:27.003 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.003603 | orchestrator | 00:01:27.003 STDOUT terraform:  + metadata = (known after apply) 2025-09-11 00:01:27.003650 | orchestrator | 00:01:27.003 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-11 00:01:27.003698 | orchestrator | 00:01:27.003 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.003723 | orchestrator | 00:01:27.003 STDOUT terraform:  + size = 20 2025-09-11 00:01:27.003753 | orchestrator | 00:01:27.003 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-11 00:01:27.003784 | orchestrator | 00:01:27.003 STDOUT terraform:  + volume_type = "ssd" 2025-09-11 00:01:27.003800 | orchestrator | 00:01:27.003 STDOUT terraform:  } 2025-09-11 00:01:27.003862 | orchestrator | 00:01:27.003 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-11 00:01:27.003908 | orchestrator | 00:01:27.003 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-11 00:01:27.003950 | orchestrator | 00:01:27.003 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-11 00:01:27.003992 | orchestrator | 00:01:27.003 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-11 00:01:27.004034 | orchestrator | 00:01:27.003 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-11 00:01:27.004080 | orchestrator | 00:01:27.004 STDOUT terraform:  + all_tags = (known after apply) 2025-09-11 00:01:27.004108 | orchestrator | 00:01:27.004 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.004127 | orchestrator | 00:01:27.004 STDOUT terraform:  + config_drive = true 2025-09-11 00:01:27.004170 | orchestrator | 00:01:27.004 STDOUT terraform:  + created = (known after apply) 2025-09-11 00:01:27.004213 | orchestrator | 00:01:27.004 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-11 00:01:27.004249 | orchestrator | 00:01:27.004 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-11 00:01:27.004276 | orchestrator | 00:01:27.004 STDOUT terraform:  + force_delete = false 2025-09-11 00:01:27.004317 | orchestrator | 00:01:27.004 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-11 00:01:27.004360 | orchestrator | 00:01:27.004 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.004453 | orchestrator | 00:01:27.004 STDOUT terraform:  + image_id = (known after apply) 2025-09-11 00:01:27.004507 | orchestrator | 00:01:27.004 STDOUT terraform:  + image_name = (known after apply) 2025-09-11 00:01:27.004540 | orchestrator | 00:01:27.004 STDOUT terraform:  + key_pair = "testbed" 2025-09-11 00:01:27.004579 | orchestrator | 00:01:27.004 STDOUT terraform:  + name = "testbed-manager" 2025-09-11 00:01:27.004610 | orchestrator | 00:01:27.004 STDOUT terraform:  + power_state = "active" 2025-09-11 00:01:27.004654 | orchestrator | 00:01:27.004 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.004697 | orchestrator | 00:01:27.004 STDOUT terraform:  + security_groups = (known after apply) 2025-09-11 00:01:27.004725 | orchestrator | 00:01:27.004 STDOUT terraform:  + stop_before_destroy = false 2025-09-11 00:01:27.004765 | orchestrator | 00:01:27.004 STDOUT terraform:  + updated = (known after apply) 2025-09-11 00:01:27.004801 | orchestrator | 00:01:27.004 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-11 00:01:27.004823 | orchestrator | 00:01:27.004 STDOUT terraform:  + block_device { 2025-09-11 00:01:27.004851 | orchestrator | 00:01:27.004 STDOUT terraform:  + boot_index = 0 2025-09-11 00:01:27.004883 | orchestrator | 00:01:27.004 STDOUT terraform:  + delete_on_termination = false 2025-09-11 00:01:27.004915 | orchestrator | 00:01:27.004 STDOUT terraform:  + destination_type = "volume" 2025-09-11 00:01:27.004948 | orchestrator | 00:01:27.004 STDOUT terraform:  + multiattach = false 2025-09-11 00:01:27.004982 | orchestrator | 00:01:27.004 STDOUT terraform:  + source_type = "volume" 2025-09-11 00:01:27.005025 | orchestrator | 00:01:27.004 STDOUT terraform:  + uuid = (known after apply) 2025-09-11 00:01:27.005042 | orchestrator | 00:01:27.005 STDOUT terraform:  } 2025-09-11 00:01:27.005059 | orchestrator | 00:01:27.005 STDOUT terraform:  + network { 2025-09-11 00:01:27.005084 | orchestrator | 00:01:27.005 STDOUT terraform:  + access_network = false 2025-09-11 00:01:27.005120 | orchestrator | 00:01:27.005 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-11 00:01:27.005156 | orchestrator | 00:01:27.005 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-11 00:01:27.005192 | orchestrator | 00:01:27.005 STDOUT terraform:  + mac = (known after apply) 2025-09-11 00:01:27.005238 | orchestrator | 00:01:27.005 STDOUT terraform:  + name = (known after apply) 2025-09-11 00:01:27.005273 | orchestrator | 00:01:27.005 STDOUT terraform:  + port = (known after apply) 2025-09-11 00:01:27.005309 | orchestrator | 00:01:27.005 STDOUT terraform:  + uuid = (known after apply) 2025-09-11 00:01:27.005325 | orchestrator | 00:01:27.005 STDOUT terraform:  } 2025-09-11 00:01:27.005341 | orchestrator | 00:01:27.005 STDOUT terraform:  } 2025-09-11 00:01:27.005504 | orchestrator | 00:01:27.005 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-11 00:01:27.005552 | orchestrator | 00:01:27.005 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-11 00:01:27.005594 | orchestrator | 00:01:27.005 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-11 00:01:27.005640 | orchestrator | 00:01:27.005 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-11 00:01:27.005674 | orchestrator | 00:01:27.005 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-11 00:01:27.005714 | orchestrator | 00:01:27.005 STDOUT terraform:  + all_tags = (known after apply) 2025-09-11 00:01:27.005741 | orchestrator | 00:01:27.005 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.005767 | orchestrator | 00:01:27.005 STDOUT terraform:  + config_drive = true 2025-09-11 00:01:27.005808 | orchestrator | 00:01:27.005 STDOUT terraform:  + created = (known after apply) 2025-09-11 00:01:27.005850 | orchestrator | 00:01:27.005 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-11 00:01:27.005884 | orchestrator | 00:01:27.005 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-11 00:01:27.005911 | orchestrator | 00:01:27.005 STDOUT terraform:  + force_delete = false 2025-09-11 00:01:27.005951 | orchestrator | 00:01:27.005 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-11 00:01:27.005993 | orchestrator | 00:01:27.005 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.006081 | orchestrator | 00:01:27.005 STDOUT terraform:  + image_id = (known after apply) 2025-09-11 00:01:27.006134 | orchestrator | 00:01:27.006 STDOUT terraform:  + image_name = (known after apply) 2025-09-11 00:01:27.006162 | orchestrator | 00:01:27.006 STDOUT terraform:  + key_pair = "testbed" 2025-09-11 00:01:27.006198 | orchestrator | 00:01:27.006 STDOUT terraform:  + name = "testbed-node-0" 2025-09-11 00:01:27.006226 | orchestrator | 00:01:27.006 STDOUT terraform:  + power_state = "active" 2025-09-11 00:01:27.006266 | orchestrator | 00:01:27.006 STDOUT terraform:  + region = (known after 
apply) 2025-09-11 00:01:27.006308 | orchestrator | 00:01:27.006 STDOUT terraform:  + security_groups = (known after apply) 2025-09-11 00:01:27.006335 | orchestrator | 00:01:27.006 STDOUT terraform:  + stop_before_destroy = false 2025-09-11 00:01:27.006409 | orchestrator | 00:01:27.006 STDOUT terraform:  + updated = (known after apply) 2025-09-11 00:01:27.006449 | orchestrator | 00:01:27.006 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-11 00:01:27.006467 | orchestrator | 00:01:27.006 STDOUT terraform:  + block_device { 2025-09-11 00:01:27.006495 | orchestrator | 00:01:27.006 STDOUT terraform:  + boot_index = 0 2025-09-11 00:01:27.006530 | orchestrator | 00:01:27.006 STDOUT terraform:  + delete_on_termination = false 2025-09-11 00:01:27.006564 | orchestrator | 00:01:27.006 STDOUT terraform:  + destination_type = "volume" 2025-09-11 00:01:27.006595 | orchestrator | 00:01:27.006 STDOUT terraform:  + multiattach = false 2025-09-11 00:01:27.006630 | orchestrator | 00:01:27.006 STDOUT terraform:  + source_type = "volume" 2025-09-11 00:01:27.006675 | orchestrator | 00:01:27.006 STDOUT terraform:  + uuid = (known after apply) 2025-09-11 00:01:27.006692 | orchestrator | 00:01:27.006 STDOUT terraform:  } 2025-09-11 00:01:27.006710 | orchestrator | 00:01:27.006 STDOUT terraform:  + network { 2025-09-11 00:01:27.006734 | orchestrator | 00:01:27.006 STDOUT terraform:  + access_network = false 2025-09-11 00:01:27.006769 | orchestrator | 00:01:27.006 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-11 00:01:27.006805 | orchestrator | 00:01:27.006 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-11 00:01:27.006840 | orchestrator | 00:01:27.006 STDOUT terraform:  + mac = (known after apply) 2025-09-11 00:01:27.006876 | orchestrator | 00:01:27.006 STDOUT terraform:  + name = (known after apply) 2025-09-11 00:01:27.006912 | orchestrator | 00:01:27.006 STDOUT terraform:  + port = (known after apply) 2025-09-11 
00:01:27.006948 | orchestrator | 00:01:27.006 STDOUT terraform:  + uuid = (known after apply) 2025-09-11 00:01:27.006964 | orchestrator | 00:01:27.006 STDOUT terraform:  } 2025-09-11 00:01:27.006980 | orchestrator | 00:01:27.006 STDOUT terraform:  } 2025-09-11 00:01:27.007031 | orchestrator | 00:01:27.006 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-11 00:01:27.007079 | orchestrator | 00:01:27.007 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-11 00:01:27.007118 | orchestrator | 00:01:27.007 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-11 00:01:27.007158 | orchestrator | 00:01:27.007 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-11 00:01:27.007198 | orchestrator | 00:01:27.007 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-11 00:01:27.007239 | orchestrator | 00:01:27.007 STDOUT terraform:  + all_tags = (known after apply) 2025-09-11 00:01:27.007267 | orchestrator | 00:01:27.007 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.007291 | orchestrator | 00:01:27.007 STDOUT terraform:  + config_drive = true 2025-09-11 00:01:27.007332 | orchestrator | 00:01:27.007 STDOUT terraform:  + created = (known after apply) 2025-09-11 00:01:27.007372 | orchestrator | 00:01:27.007 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-11 00:01:27.007415 | orchestrator | 00:01:27.007 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-11 00:01:27.007441 | orchestrator | 00:01:27.007 STDOUT terraform:  + force_delete = false 2025-09-11 00:01:27.007488 | orchestrator | 00:01:27.007 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-11 00:01:27.007524 | orchestrator | 00:01:27.007 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.007564 | orchestrator | 00:01:27.007 STDOUT terraform:  + image_id = (known after apply) 2025-09-11 00:01:27.007604 | orchestrator | 00:01:27.007 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-11 00:01:27.007633 | orchestrator | 00:01:27.007 STDOUT terraform:  + key_pair = "testbed" 2025-09-11 00:01:27.007673 | orchestrator | 00:01:27.007 STDOUT terraform:  + name = "testbed-node-1" 2025-09-11 00:01:27.007701 | orchestrator | 00:01:27.007 STDOUT terraform:  + power_state = "active" 2025-09-11 00:01:27.007743 | orchestrator | 00:01:27.007 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.007782 | orchestrator | 00:01:27.007 STDOUT terraform:  + security_groups = (known after apply) 2025-09-11 00:01:27.007808 | orchestrator | 00:01:27.007 STDOUT terraform:  + stop_before_destroy = false 2025-09-11 00:01:27.007848 | orchestrator | 00:01:27.007 STDOUT terraform:  + updated = (known after apply) 2025-09-11 00:01:27.007905 | orchestrator | 00:01:27.007 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-11 00:01:27.007925 | orchestrator | 00:01:27.007 STDOUT terraform:  + block_device { 2025-09-11 00:01:27.007954 | orchestrator | 00:01:27.007 STDOUT terraform:  + boot_index = 0 2025-09-11 00:01:27.007989 | orchestrator | 00:01:27.007 STDOUT terraform:  + delete_on_termination = false 2025-09-11 00:01:27.008018 | orchestrator | 00:01:27.007 STDOUT terraform:  + destination_type = "volume" 2025-09-11 00:01:27.008051 | orchestrator | 00:01:27.008 STDOUT terraform:  + multiattach = false 2025-09-11 00:01:27.008087 | orchestrator | 00:01:27.008 STDOUT terraform:  + source_type = "volume" 2025-09-11 00:01:27.008132 | orchestrator | 00:01:27.008 STDOUT terraform:  + uuid = (known after apply) 2025-09-11 00:01:27.008143 | orchestrator | 00:01:27.008 STDOUT terraform:  } 2025-09-11 00:01:27.008152 | orchestrator | 00:01:27.008 STDOUT terraform:  + network { 2025-09-11 00:01:27.008178 | orchestrator | 00:01:27.008 STDOUT terraform:  + access_network = false 2025-09-11 00:01:27.008214 | orchestrator | 00:01:27.008 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-11 00:01:27.008249 | orchestrator | 00:01:27.008 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-11 00:01:27.008285 | orchestrator | 00:01:27.008 STDOUT terraform:  + mac = (known after apply) 2025-09-11 00:01:27.008322 | orchestrator | 00:01:27.008 STDOUT terraform:  + name = (known after apply) 2025-09-11 00:01:27.008358 | orchestrator | 00:01:27.008 STDOUT terraform:  + port = (known after apply) 2025-09-11 00:01:27.008421 | orchestrator | 00:01:27.008 STDOUT terraform:  + uuid = (known after apply) 2025-09-11 00:01:27.008437 | orchestrator | 00:01:27.008 STDOUT terraform:  } 2025-09-11 00:01:27.008454 | orchestrator | 00:01:27.008 STDOUT terraform:  } 2025-09-11 00:01:27.008505 | orchestrator | 00:01:27.008 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-11 00:01:27.008556 | orchestrator | 00:01:27.008 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-11 00:01:27.008592 | orchestrator | 00:01:27.008 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-11 00:01:27.008628 | orchestrator | 00:01:27.008 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-11 00:01:27.008665 | orchestrator | 00:01:27.008 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-11 00:01:27.008703 | orchestrator | 00:01:27.008 STDOUT terraform:  + all_tags = (known after apply) 2025-09-11 00:01:27.008730 | orchestrator | 00:01:27.008 STDOUT terraform:  + availability_zone = "nova" 2025-09-11 00:01:27.014077 | orchestrator | 00:01:27.008 STDOUT terraform:  + config_drive = true 2025-09-11 00:01:27.014110 | orchestrator | 00:01:27.008 STDOUT terraform:  + created = (known after apply) 2025-09-11 00:01:27.014116 | orchestrator | 00:01:27.008 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-11 00:01:27.014120 | orchestrator | 00:01:27.008 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-11 00:01:27.014124 | orchestrator | 00:01:27.008 
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-11 00:01:27.019839 | orchestrator | 00:01:27.019 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-11 00:01:27.019876 | orchestrator | 00:01:27.019 STDOUT terraform:  + all_tags = (known after apply) 2025-09-11 00:01:27.019913 | orchestrator | 00:01:27.019 STDOUT terraform:  + device_id = (known after apply) 2025-09-11 00:01:27.019949 | orchestrator | 00:01:27.019 STDOUT terraform:  + device_owner = (known after apply) 2025-09-11 00:01:27.019983 | orchestrator | 00:01:27.019 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-11 00:01:27.020018 | orchestrator | 00:01:27.019 STDOUT terraform:  + dns_name = (known after apply) 2025-09-11 00:01:27.020054 | orchestrator | 00:01:27.020 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.020089 | orchestrator | 00:01:27.020 STDOUT terraform:  + mac_address = (known after apply) 2025-09-11 00:01:27.020124 | orchestrator | 00:01:27.020 STDOUT terraform:  + network_id = (known after apply) 2025-09-11 00:01:27.020158 | orchestrator | 00:01:27.020 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-11 00:01:27.020193 | orchestrator | 00:01:27.020 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-11 00:01:27.020228 | orchestrator | 00:01:27.020 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.020261 | orchestrator | 00:01:27.020 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-11 00:01:27.020297 | orchestrator | 00:01:27.020 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.020317 | orchestrator | 00:01:27.020 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.020345 | orchestrator | 00:01:27.020 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-11 00:01:27.020352 | orchestrator | 00:01:27.020 STDOUT terraform:  } 2025-09-11 00:01:27.020376 | orchestrator | 00:01:27.020 STDOUT terraform:  
+ allowed_address_pairs { 2025-09-11 00:01:27.020414 | orchestrator | 00:01:27.020 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-11 00:01:27.020420 | orchestrator | 00:01:27.020 STDOUT terraform:  } 2025-09-11 00:01:27.020442 | orchestrator | 00:01:27.020 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.020469 | orchestrator | 00:01:27.020 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-11 00:01:27.020481 | orchestrator | 00:01:27.020 STDOUT terraform:  } 2025-09-11 00:01:27.020499 | orchestrator | 00:01:27.020 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.020526 | orchestrator | 00:01:27.020 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-11 00:01:27.020540 | orchestrator | 00:01:27.020 STDOUT terraform:  } 2025-09-11 00:01:27.020562 | orchestrator | 00:01:27.020 STDOUT terraform:  + binding (known after apply) 2025-09-11 00:01:27.020577 | orchestrator | 00:01:27.020 STDOUT terraform:  + fixed_ip { 2025-09-11 00:01:27.020601 | orchestrator | 00:01:27.020 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-11 00:01:27.020629 | orchestrator | 00:01:27.020 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-11 00:01:27.020634 | orchestrator | 00:01:27.020 STDOUT terraform:  } 2025-09-11 00:01:27.020651 | orchestrator | 00:01:27.020 STDOUT terraform:  } 2025-09-11 00:01:27.020695 | orchestrator | 00:01:27.020 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-11 00:01:27.020738 | orchestrator | 00:01:27.020 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-11 00:01:27.020774 | orchestrator | 00:01:27.020 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-11 00:01:27.020808 | orchestrator | 00:01:27.020 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-11 00:01:27.020843 | orchestrator | 00:01:27.020 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-09-11 00:01:27.020877 | orchestrator | 00:01:27.020 STDOUT terraform:  + all_tags = (known after apply) 2025-09-11 00:01:27.020912 | orchestrator | 00:01:27.020 STDOUT terraform:  + device_id = (known after apply) 2025-09-11 00:01:27.020947 | orchestrator | 00:01:27.020 STDOUT terraform:  + device_owner = (known after apply) 2025-09-11 00:01:27.020987 | orchestrator | 00:01:27.020 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-11 00:01:27.021016 | orchestrator | 00:01:27.020 STDOUT terraform:  + dns_name = (known after apply) 2025-09-11 00:01:27.021053 | orchestrator | 00:01:27.021 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.021086 | orchestrator | 00:01:27.021 STDOUT terraform:  + mac_address = (known after apply) 2025-09-11 00:01:27.021120 | orchestrator | 00:01:27.021 STDOUT terraform:  + network_id = (known after apply) 2025-09-11 00:01:27.021157 | orchestrator | 00:01:27.021 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-11 00:01:27.021191 | orchestrator | 00:01:27.021 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-11 00:01:27.021226 | orchestrator | 00:01:27.021 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.021261 | orchestrator | 00:01:27.021 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-11 00:01:27.021296 | orchestrator | 00:01:27.021 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.021315 | orchestrator | 00:01:27.021 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.021346 | orchestrator | 00:01:27.021 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-11 00:01:27.021363 | orchestrator | 00:01:27.021 STDOUT terraform:  } 2025-09-11 00:01:27.021368 | orchestrator | 00:01:27.021 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.021408 | orchestrator | 00:01:27.021 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-11 00:01:27.021415 | 
orchestrator | 00:01:27.021 STDOUT terraform:  } 2025-09-11 00:01:27.021435 | orchestrator | 00:01:27.021 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.021462 | orchestrator | 00:01:27.021 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-11 00:01:27.021469 | orchestrator | 00:01:27.021 STDOUT terraform:  } 2025-09-11 00:01:27.021491 | orchestrator | 00:01:27.021 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.021517 | orchestrator | 00:01:27.021 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-11 00:01:27.021523 | orchestrator | 00:01:27.021 STDOUT terraform:  } 2025-09-11 00:01:27.021554 | orchestrator | 00:01:27.021 STDOUT terraform:  + binding (known after apply) 2025-09-11 00:01:27.021568 | orchestrator | 00:01:27.021 STDOUT terraform:  + fixed_ip { 2025-09-11 00:01:27.021591 | orchestrator | 00:01:27.021 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-11 00:01:27.021619 | orchestrator | 00:01:27.021 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-11 00:01:27.021641 | orchestrator | 00:01:27.021 STDOUT terraform:  } 2025-09-11 00:01:27.021654 | orchestrator | 00:01:27.021 STDOUT terraform:  } 2025-09-11 00:01:27.021699 | orchestrator | 00:01:27.021 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-11 00:01:27.021743 | orchestrator | 00:01:27.021 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-11 00:01:27.021778 | orchestrator | 00:01:27.021 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-11 00:01:27.021814 | orchestrator | 00:01:27.021 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-11 00:01:27.021848 | orchestrator | 00:01:27.021 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-11 00:01:27.021884 | orchestrator | 00:01:27.021 STDOUT terraform:  + all_tags = (known after apply) 2025-09-11 00:01:27.021918 | orchestrator | 
00:01:27.021 STDOUT terraform:  + device_id = (known after apply) 2025-09-11 00:01:27.021953 | orchestrator | 00:01:27.021 STDOUT terraform:  + device_owner = (known after apply) 2025-09-11 00:01:27.021989 | orchestrator | 00:01:27.021 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-11 00:01:27.022039 | orchestrator | 00:01:27.021 STDOUT terraform:  + dns_name = (known after apply) 2025-09-11 00:01:27.022075 | orchestrator | 00:01:27.022 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.022111 | orchestrator | 00:01:27.022 STDOUT terraform:  + mac_address = (known after apply) 2025-09-11 00:01:27.022145 | orchestrator | 00:01:27.022 STDOUT terraform:  + network_id = (known after apply) 2025-09-11 00:01:27.022183 | orchestrator | 00:01:27.022 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-11 00:01:27.022216 | orchestrator | 00:01:27.022 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-11 00:01:27.022246 | orchestrator | 00:01:27.022 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.022280 | orchestrator | 00:01:27.022 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-11 00:01:27.022314 | orchestrator | 00:01:27.022 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.022333 | orchestrator | 00:01:27.022 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.022362 | orchestrator | 00:01:27.022 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-11 00:01:27.022406 | orchestrator | 00:01:27.022 STDOUT terraform:  } 2025-09-11 00:01:27.022414 | orchestrator | 00:01:27.022 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.022445 | orchestrator | 00:01:27.022 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-11 00:01:27.022459 | orchestrator | 00:01:27.022 STDOUT terraform:  } 2025-09-11 00:01:27.022478 | orchestrator | 00:01:27.022 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 
00:01:27.022505 | orchestrator | 00:01:27.022 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-11 00:01:27.022519 | orchestrator | 00:01:27.022 STDOUT terraform:  } 2025-09-11 00:01:27.022539 | orchestrator | 00:01:27.022 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.022567 | orchestrator | 00:01:27.022 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-11 00:01:27.022582 | orchestrator | 00:01:27.022 STDOUT terraform:  } 2025-09-11 00:01:27.022605 | orchestrator | 00:01:27.022 STDOUT terraform:  + binding (known after apply) 2025-09-11 00:01:27.022619 | orchestrator | 00:01:27.022 STDOUT terraform:  + fixed_ip { 2025-09-11 00:01:27.022648 | orchestrator | 00:01:27.022 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-11 00:01:27.022678 | orchestrator | 00:01:27.022 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-11 00:01:27.022692 | orchestrator | 00:01:27.022 STDOUT terraform:  } 2025-09-11 00:01:27.022705 | orchestrator | 00:01:27.022 STDOUT terraform:  } 2025-09-11 00:01:27.022750 | orchestrator | 00:01:27.022 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-11 00:01:27.022794 | orchestrator | 00:01:27.022 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-11 00:01:27.022829 | orchestrator | 00:01:27.022 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-11 00:01:27.022864 | orchestrator | 00:01:27.022 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-11 00:01:27.022899 | orchestrator | 00:01:27.022 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-11 00:01:27.022934 | orchestrator | 00:01:27.022 STDOUT terraform:  + all_tags = (known after apply) 2025-09-11 00:01:27.022969 | orchestrator | 00:01:27.022 STDOUT terraform:  + device_id = (known after apply) 2025-09-11 00:01:27.023003 | orchestrator | 00:01:27.022 STDOUT terraform:  + device_owner = (known after 
apply) 2025-09-11 00:01:27.023040 | orchestrator | 00:01:27.023 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-11 00:01:27.023075 | orchestrator | 00:01:27.023 STDOUT terraform:  + dns_name = (known after apply) 2025-09-11 00:01:27.023109 | orchestrator | 00:01:27.023 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.023144 | orchestrator | 00:01:27.023 STDOUT terraform:  + mac_address = (known after apply) 2025-09-11 00:01:27.023178 | orchestrator | 00:01:27.023 STDOUT terraform:  + network_id = (known after apply) 2025-09-11 00:01:27.023212 | orchestrator | 00:01:27.023 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-11 00:01:27.023248 | orchestrator | 00:01:27.023 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-11 00:01:27.023283 | orchestrator | 00:01:27.023 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.023317 | orchestrator | 00:01:27.023 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-11 00:01:27.023352 | orchestrator | 00:01:27.023 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.023370 | orchestrator | 00:01:27.023 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.023409 | orchestrator | 00:01:27.023 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-11 00:01:27.023415 | orchestrator | 00:01:27.023 STDOUT terraform:  } 2025-09-11 00:01:27.023438 | orchestrator | 00:01:27.023 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.023465 | orchestrator | 00:01:27.023 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-11 00:01:27.023479 | orchestrator | 00:01:27.023 STDOUT terraform:  } 2025-09-11 00:01:27.023497 | orchestrator | 00:01:27.023 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.023527 | orchestrator | 00:01:27.023 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-11 00:01:27.023533 | orchestrator | 00:01:27.023 STDOUT terraform:  } 
2025-09-11 00:01:27.023554 | orchestrator | 00:01:27.023 STDOUT terraform:  + allowed_address_pairs { 2025-09-11 00:01:27.023582 | orchestrator | 00:01:27.023 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-11 00:01:27.023596 | orchestrator | 00:01:27.023 STDOUT terraform:  } 2025-09-11 00:01:27.023620 | orchestrator | 00:01:27.023 STDOUT terraform:  + binding (known after apply) 2025-09-11 00:01:27.023634 | orchestrator | 00:01:27.023 STDOUT terraform:  + fixed_ip { 2025-09-11 00:01:27.023658 | orchestrator | 00:01:27.023 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-11 00:01:27.023685 | orchestrator | 00:01:27.023 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-11 00:01:27.023691 | orchestrator | 00:01:27.023 STDOUT terraform:  } 2025-09-11 00:01:27.023708 | orchestrator | 00:01:27.023 STDOUT terraform:  } 2025-09-11 00:01:27.023754 | orchestrator | 00:01:27.023 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-11 00:01:27.023812 | orchestrator | 00:01:27.023 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-11 00:01:27.023840 | orchestrator | 00:01:27.023 STDOUT terraform:  + force_destroy = false 2025-09-11 00:01:27.023892 | orchestrator | 00:01:27.023 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.023939 | orchestrator | 00:01:27.023 STDOUT terraform:  + port_id = (known after apply) 2025-09-11 00:01:27.023970 | orchestrator | 00:01:27.023 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.023999 | orchestrator | 00:01:27.023 STDOUT terraform:  + router_id = (known after apply) 2025-09-11 00:01:27.024029 | orchestrator | 00:01:27.023 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-11 00:01:27.024044 | orchestrator | 00:01:27.024 STDOUT terraform:  } 2025-09-11 00:01:27.024080 | orchestrator | 00:01:27.024 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-09-11 00:01:27.024116 | orchestrator | 00:01:27.024 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-11 00:01:27.024155 | orchestrator | 00:01:27.024 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-11 00:01:27.024190 | orchestrator | 00:01:27.024 STDOUT terraform:  + all_tags = (known after apply) 2025-09-11 00:01:27.024212 | orchestrator | 00:01:27.024 STDOUT terraform:  + availability_zone_hints = [ 2025-09-11 00:01:27.024227 | orchestrator | 00:01:27.024 STDOUT terraform:  + "nova", 2025-09-11 00:01:27.024233 | orchestrator | 00:01:27.024 STDOUT terraform:  ] 2025-09-11 00:01:27.024269 | orchestrator | 00:01:27.024 STDOUT terraform:  + distributed = (known after apply) 2025-09-11 00:01:27.024306 | orchestrator | 00:01:27.024 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-11 00:01:27.024355 | orchestrator | 00:01:27.024 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-11 00:01:27.024421 | orchestrator | 00:01:27.024 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-11 00:01:27.024452 | orchestrator | 00:01:27.024 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.024480 | orchestrator | 00:01:27.024 STDOUT terraform:  + name = "testbed" 2025-09-11 00:01:27.024516 | orchestrator | 00:01:27.024 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.025012 | orchestrator | 00:01:27.024 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.025052 | orchestrator | 00:01:27.025 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-11 00:01:27.025068 | orchestrator | 00:01:27.025 STDOUT terraform:  } 2025-09-11 00:01:27.025123 | orchestrator | 00:01:27.025 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-11 00:01:27.030599 | orchestrator | 00:01:27.025 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-11 00:01:27.030623 | orchestrator | 00:01:27.030 STDOUT terraform:  + description = "ssh" 2025-09-11 00:01:27.030658 | orchestrator | 00:01:27.030 STDOUT terraform:  + direction = "ingress" 2025-09-11 00:01:27.030681 | orchestrator | 00:01:27.030 STDOUT terraform:  + ethertype = "IPv4" 2025-09-11 00:01:27.030727 | orchestrator | 00:01:27.030 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.030749 | orchestrator | 00:01:27.030 STDOUT terraform:  + port_range_max = 22 2025-09-11 00:01:27.030773 | orchestrator | 00:01:27.030 STDOUT terraform:  + port_range_min = 22 2025-09-11 00:01:27.030798 | orchestrator | 00:01:27.030 STDOUT terraform:  + protocol = "tcp" 2025-09-11 00:01:27.030834 | orchestrator | 00:01:27.030 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.030870 | orchestrator | 00:01:27.030 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-11 00:01:27.030906 | orchestrator | 00:01:27.030 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-11 00:01:27.030934 | orchestrator | 00:01:27.030 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-11 00:01:27.030972 | orchestrator | 00:01:27.030 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-11 00:01:27.031003 | orchestrator | 00:01:27.030 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.031009 | orchestrator | 00:01:27.030 STDOUT terraform:  } 2025-09-11 00:01:27.031071 | orchestrator | 00:01:27.031 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-11 00:01:27.031116 | orchestrator | 00:01:27.031 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-11 00:01:27.031144 | orchestrator | 00:01:27.031 STDOUT terraform:  + description = "wireguard" 2025-09-11 00:01:27.031174 | orchestrator 
| 00:01:27.031 STDOUT terraform:  + direction = "ingress" 2025-09-11 00:01:27.031199 | orchestrator | 00:01:27.031 STDOUT terraform:  + ethertype = "IPv4" 2025-09-11 00:01:27.031235 | orchestrator | 00:01:27.031 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.031259 | orchestrator | 00:01:27.031 STDOUT terraform:  + port_range_max = 51820 2025-09-11 00:01:27.031283 | orchestrator | 00:01:27.031 STDOUT terraform:  + port_range_min = 51820 2025-09-11 00:01:27.031308 | orchestrator | 00:01:27.031 STDOUT terraform:  + protocol = "udp" 2025-09-11 00:01:27.031344 | orchestrator | 00:01:27.031 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.031396 | orchestrator | 00:01:27.031 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-11 00:01:27.031429 | orchestrator | 00:01:27.031 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-11 00:01:27.031457 | orchestrator | 00:01:27.031 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-11 00:01:27.031496 | orchestrator | 00:01:27.031 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-11 00:01:27.031528 | orchestrator | 00:01:27.031 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.031534 | orchestrator | 00:01:27.031 STDOUT terraform:  } 2025-09-11 00:01:27.031589 | orchestrator | 00:01:27.031 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-11 00:01:27.031640 | orchestrator | 00:01:27.031 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-11 00:01:27.031668 | orchestrator | 00:01:27.031 STDOUT terraform:  + direction = "ingress" 2025-09-11 00:01:27.031692 | orchestrator | 00:01:27.031 STDOUT terraform:  + ethertype = "IPv4" 2025-09-11 00:01:27.031729 | orchestrator | 00:01:27.031 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.031754 | orchestrator | 
00:01:27.031 STDOUT terraform:  + protocol = "tcp" 2025-09-11 00:01:27.031788 | orchestrator | 00:01:27.031 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.031823 | orchestrator | 00:01:27.031 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-11 00:01:27.031857 | orchestrator | 00:01:27.031 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-11 00:01:27.031891 | orchestrator | 00:01:27.031 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-11 00:01:27.031926 | orchestrator | 00:01:27.031 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-11 00:01:27.031961 | orchestrator | 00:01:27.031 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.031967 | orchestrator | 00:01:27.031 STDOUT terraform:  } 2025-09-11 00:01:27.032021 | orchestrator | 00:01:27.031 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-11 00:01:27.032073 | orchestrator | 00:01:27.032 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-11 00:01:27.032103 | orchestrator | 00:01:27.032 STDOUT terraform:  + direction = "ingress" 2025-09-11 00:01:27.032128 | orchestrator | 00:01:27.032 STDOUT terraform:  + ethertype = "IPv4" 2025-09-11 00:01:27.032163 | orchestrator | 00:01:27.032 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.032187 | orchestrator | 00:01:27.032 STDOUT terraform:  + protocol = "udp" 2025-09-11 00:01:27.032223 | orchestrator | 00:01:27.032 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.032257 | orchestrator | 00:01:27.032 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-11 00:01:27.032292 | orchestrator | 00:01:27.032 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-11 00:01:27.032329 | orchestrator | 00:01:27.032 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-09-11 00:01:27.032362 | orchestrator | 00:01:27.032 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-11 00:01:27.032408 | orchestrator | 00:01:27.032 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.032415 | orchestrator | 00:01:27.032 STDOUT terraform:  } 2025-09-11 00:01:27.032469 | orchestrator | 00:01:27.032 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-11 00:01:27.032520 | orchestrator | 00:01:27.032 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-11 00:01:27.032547 | orchestrator | 00:01:27.032 STDOUT terraform:  + direction = "ingress" 2025-09-11 00:01:27.032572 | orchestrator | 00:01:27.032 STDOUT terraform:  + ethertype = "IPv4" 2025-09-11 00:01:27.032607 | orchestrator | 00:01:27.032 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.032632 | orchestrator | 00:01:27.032 STDOUT terraform:  + protocol = "icmp" 2025-09-11 00:01:27.032671 | orchestrator | 00:01:27.032 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.032702 | orchestrator | 00:01:27.032 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-11 00:01:27.032737 | orchestrator | 00:01:27.032 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-11 00:01:27.032766 | orchestrator | 00:01:27.032 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-11 00:01:27.032801 | orchestrator | 00:01:27.032 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-11 00:01:27.032836 | orchestrator | 00:01:27.032 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.032842 | orchestrator | 00:01:27.032 STDOUT terraform:  } 2025-09-11 00:01:27.032894 | orchestrator | 00:01:27.032 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-11 00:01:27.032983 | 
orchestrator | 00:01:27.032 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-11 00:01:27.033009 | orchestrator | 00:01:27.032 STDOUT terraform:  + direction = "ingress" 2025-09-11 00:01:27.033034 | orchestrator | 00:01:27.033 STDOUT terraform:  + ethertype = "IPv4" 2025-09-11 00:01:27.033071 | orchestrator | 00:01:27.033 STDOUT terraform:  + id = (known after apply) 2025-09-11 00:01:27.033097 | orchestrator | 00:01:27.033 STDOUT terraform:  + protocol = "tcp" 2025-09-11 00:01:27.033133 | orchestrator | 00:01:27.033 STDOUT terraform:  + region = (known after apply) 2025-09-11 00:01:27.033170 | orchestrator | 00:01:27.033 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-11 00:01:27.033206 | orchestrator | 00:01:27.033 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-11 00:01:27.033236 | orchestrator | 00:01:27.033 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-11 00:01:27.033272 | orchestrator | 00:01:27.033 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-11 00:01:27.033308 | orchestrator | 00:01:27.033 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-11 00:01:27.033314 | orchestrator | 00:01:27.033 STDOUT terraform:  } 2025-09-11 00:01:27.033368 | orchestrator | 00:01:27.033 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-11 00:01:27.033462 | orchestrator | 00:01:27.033 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-11 00:01:27.033492 | orchestrator | 00:01:27.033 STDOUT terraform:  + direction = "ingress" 2025-09-11 00:01:27.033517 | orchestrator | 00:01:27.033 STDOUT terraform:  + ethertype = "IPv4 2025-09-11 00:01:27.033581 | orchestrator | 00:01:27.033 STDOUT terraform: " 2025-09-11 00:01:27.033622 | orchestrator | 00:01:27.033 STDOUT terraform:  + id = (known after apply) 2025-09-11 
00:01:27.033647 | orchestrator | 00:01:27.033 STDOUT terraform:  + protocol = "udp"
2025-09-11 00:01:27.033683 | orchestrator | 00:01:27.033 STDOUT terraform:  + region = (known after apply)
2025-09-11 00:01:27.033718 | orchestrator | 00:01:27.033 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-11 00:01:27.033763 | orchestrator | 00:01:27.033 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-11 00:01:27.033791 | orchestrator | 00:01:27.033 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-11 00:01:27.033827 | orchestrator | 00:01:27.033 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-11 00:01:27.033863 | orchestrator | 00:01:27.033 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-11 00:01:27.033869 | orchestrator | 00:01:27.033 STDOUT terraform:  }
2025-09-11 00:01:27.033925 | orchestrator | 00:01:27.033 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-09-11 00:01:27.033976 | orchestrator | 00:01:27.033 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-09-11 00:01:27.034007 | orchestrator | 00:01:27.033 STDOUT terraform:  + direction = "ingress"
2025-09-11 00:01:27.036220 | orchestrator | 00:01:27.034 STDOUT terraform:  + ethertype = "IPv4"
2025-09-11 00:01:27.036236 | orchestrator | 00:01:27.034 STDOUT terraform:  + id = (known after apply)
2025-09-11 00:01:27.036240 | orchestrator | 00:01:27.034 STDOUT terraform:  + protocol = "icmp"
2025-09-11 00:01:27.036245 | orchestrator | 00:01:27.034 STDOUT terraform:  + region = (known after apply)
2025-09-11 00:01:27.036249 | orchestrator | 00:01:27.034 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-11 00:01:27.036253 | orchestrator | 00:01:27.034 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-11 00:01:27.036257 | orchestrator | 00:01:27.034 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-11 00:01:27.036260 | orchestrator | 00:01:27.034 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-11 00:01:27.036264 | orchestrator | 00:01:27.034 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-11 00:01:27.036268 | orchestrator | 00:01:27.034 STDOUT terraform:  }
2025-09-11 00:01:27.036272 | orchestrator | 00:01:27.034 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-09-11 00:01:27.036276 | orchestrator | 00:01:27.034 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-09-11 00:01:27.036280 | orchestrator | 00:01:27.034 STDOUT terraform:  + description = "vrrp"
2025-09-11 00:01:27.036284 | orchestrator | 00:01:27.034 STDOUT terraform:  + direction = "ingress"
2025-09-11 00:01:27.036309 | orchestrator | 00:01:27.034 STDOUT terraform:  + ethertype = "IPv4"
2025-09-11 00:01:27.036313 | orchestrator | 00:01:27.034 STDOUT terraform:  + id = (known after apply)
2025-09-11 00:01:27.036317 | orchestrator | 00:01:27.034 STDOUT terraform:  + protocol = "112"
2025-09-11 00:01:27.036321 | orchestrator | 00:01:27.034 STDOUT terraform:  + region = (known after apply)
2025-09-11 00:01:27.036325 | orchestrator | 00:01:27.034 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-11 00:01:27.036328 | orchestrator | 00:01:27.034 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-11 00:01:27.036332 | orchestrator | 00:01:27.034 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-11 00:01:27.036345 | orchestrator | 00:01:27.034 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-11 00:01:27.036349 | orchestrator | 00:01:27.034 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-11 00:01:27.036352 | orchestrator | 00:01:27.034 STDOUT terraform:  }
2025-09-11 00:01:27.036356 | orchestrator | 00:01:27.034 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-11 00:01:27.036360 | orchestrator | 00:01:27.034 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-11 00:01:27.036364 | orchestrator | 00:01:27.034 STDOUT terraform:  + all_tags = (known after apply)
2025-09-11 00:01:27.036368 | orchestrator | 00:01:27.034 STDOUT terraform:  + description = "management security group"
2025-09-11 00:01:27.036371 | orchestrator | 00:01:27.034 STDOUT terraform:  + id = (known after apply)
2025-09-11 00:01:27.036375 | orchestrator | 00:01:27.035 STDOUT terraform:  + name = "testbed-management"
2025-09-11 00:01:27.036399 | orchestrator | 00:01:27.035 STDOUT terraform:  + region = (known after apply)
2025-09-11 00:01:27.036403 | orchestrator | 00:01:27.035 STDOUT terraform:  + stateful = (known after apply)
2025-09-11 00:01:27.036406 | orchestrator | 00:01:27.035 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-11 00:01:27.036410 | orchestrator | 00:01:27.035 STDOUT terraform:  }
2025-09-11 00:01:27.036414 | orchestrator | 00:01:27.035 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-11 00:01:27.036420 | orchestrator | 00:01:27.035 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-11 00:01:27.036429 | orchestrator | 00:01:27.035 STDOUT terraform:  + all_tags = (known after apply)
2025-09-11 00:01:27.036433 | orchestrator | 00:01:27.035 STDOUT terraform:  + description = "node security group"
2025-09-11 00:01:27.036437 | orchestrator | 00:01:27.035 STDOUT terraform:  + id = (known after apply)
2025-09-11 00:01:27.036441 | orchestrator | 00:01:27.035 STDOUT terraform:  + name = "testbed-node"
2025-09-11 00:01:27.036445 | orchestrator | 00:01:27.035 STDOUT terraform:  + region = (known after apply)
2025-09-11 00:01:27.036448 | orchestrator | 00:01:27.035 STDOUT terraform:  + stateful = (known after apply)
2025-09-11 00:01:27.036452 | orchestrator | 00:01:27.035 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-11 00:01:27.036456 | orchestrator | 00:01:27.035 STDOUT terraform:  }
2025-09-11 00:01:27.036460 | orchestrator | 00:01:27.035 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-11 00:01:27.036463 | orchestrator | 00:01:27.035 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-11 00:01:27.036467 | orchestrator | 00:01:27.035 STDOUT terraform:  + all_tags = (known after apply)
2025-09-11 00:01:27.036471 | orchestrator | 00:01:27.035 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-11 00:01:27.036475 | orchestrator | 00:01:27.035 STDOUT terraform:  + dns_nameservers = [
2025-09-11 00:01:27.036478 | orchestrator | 00:01:27.035 STDOUT terraform:  + "8.8.8.8",
2025-09-11 00:01:27.036486 | orchestrator | 00:01:27.035 STDOUT terraform:  + "9.9.9.9",
2025-09-11 00:01:27.036490 | orchestrator | 00:01:27.035 STDOUT terraform:  ]
2025-09-11 00:01:27.036494 | orchestrator | 00:01:27.035 STDOUT terraform:  + enable_dhcp = true
2025-09-11 00:01:27.036498 | orchestrator | 00:01:27.035 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-11 00:01:27.036501 | orchestrator | 00:01:27.035 STDOUT terraform:  + id = (known after apply)
2025-09-11 00:01:27.036505 | orchestrator | 00:01:27.035 STDOUT terraform:  + ip_version = 4
2025-09-11 00:01:27.036509 | orchestrator | 00:01:27.035 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-11 00:01:27.036513 | orchestrator | 00:01:27.035 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-11 00:01:27.036517 | orchestrator | 00:01:27.035 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-11 00:01:27.036520 | orchestrator | 00:01:27.035 STDOUT terraform:  + network_id = (known after apply)
2025-09-11 00:01:27.036524 | orchestrator | 00:01:27.035 STDOUT terraform:  + no_gateway = false
2025-09-11 00:01:27.036528 | orchestrator | 00:01:27.035 STDOUT terraform:  + region = (known after apply)
2025-09-11 00:01:27.036532 | orchestrator | 00:01:27.035 STDOUT terraform:  + service_types = (known after apply)
2025-09-11 00:01:27.036535 | orchestrator | 00:01:27.035 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-11 00:01:27.036539 | orchestrator | 00:01:27.035 STDOUT terraform:  + allocation_pool {
2025-09-11 00:01:27.036543 | orchestrator | 00:01:27.035 STDOUT terraform:  + end = "192.168.31.250"
2025-09-11 00:01:27.036547 | orchestrator | 00:01:27.035 STDOUT terraform:  + start = "192.168.31.200"
2025-09-11 00:01:27.036550 | orchestrator | 00:01:27.035 STDOUT terraform:  }
2025-09-11 00:01:27.036554 | orchestrator | 00:01:27.035 STDOUT terraform:  }
2025-09-11 00:01:27.036558 | orchestrator | 00:01:27.035 STDOUT terraform:  # terraform_data.image will be created
2025-09-11 00:01:27.036562 | orchestrator | 00:01:27.035 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-11 00:01:27.036566 | orchestrator | 00:01:27.035 STDOUT terraform:  + id = (known after apply)
2025-09-11 00:01:27.036569 | orchestrator | 00:01:27.035 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-11 00:01:27.036573 | orchestrator | 00:01:27.035 STDOUT terraform:  + output = (known after apply)
2025-09-11 00:01:27.036577 | orchestrator | 00:01:27.035 STDOUT terraform:  }
2025-09-11 00:01:27.036583 | orchestrator | 00:01:27.035 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-11 00:01:27.036589 | orchestrator | 00:01:27.036 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-11 00:01:27.036593 | orchestrator | 00:01:27.036 STDOUT terraform:  + id = (known after apply)
2025-09-11 00:01:27.036597 | orchestrator | 00:01:27.036 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-11 00:01:27.036601 | orchestrator | 00:01:27.036 STDOUT terraform:  + output = (known after apply)
2025-09-11 00:01:27.036605 | orchestrator | 00:01:27.036 STDOUT terraform:  }
2025-09-11 00:01:27.036608 | orchestrator | 00:01:27.036 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-11 00:01:27.036615 | orchestrator | 00:01:27.036 STDOUT terraform: Changes to Outputs:
2025-09-11 00:01:27.036619 | orchestrator | 00:01:27.036 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-11 00:01:27.036623 | orchestrator | 00:01:27.036 STDOUT terraform:  + private_key = (sensitive value)
2025-09-11 00:01:27.146331 | orchestrator | 00:01:27.142 STDOUT terraform: terraform_data.image: Creating...
2025-09-11 00:01:27.220663 | orchestrator | 00:01:27.220 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-11 00:01:27.220779 | orchestrator | 00:01:27.220 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=035b3912-4fc6-f5b9-ca04-f987c438c3cd]
2025-09-11 00:01:27.220935 | orchestrator | 00:01:27.220 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=02fb2fff-3c94-410e-5909-04016a0fff55]
2025-09-11 00:01:27.238338 | orchestrator | 00:01:27.238 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-11 00:01:27.238415 | orchestrator | 00:01:27.238 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-11 00:01:27.246946 | orchestrator | 00:01:27.246 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-11 00:01:27.252046 | orchestrator | 00:01:27.248 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-11 00:01:27.252077 | orchestrator | 00:01:27.249 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-11 00:01:27.252082 | orchestrator | 00:01:27.250 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-11 00:01:27.252087 | orchestrator | 00:01:27.250 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
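For readability: the plan entries for `security_group_rule_vrrp` above (IP protocol 112 is VRRP, as used by keepalived for virtual-IP failover) correspond to a resource declaration roughly like the following. This is a sketch reconstructed from the plan values only; the reference to `security_group_node` is an assumption, not taken from the testbed sources.

```hcl
# Sketch reconstructed from the plan output above.
# Attaching the rule to security_group_node is an assumption.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number 112 = VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```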
2025-09-11 00:01:27.252091 | orchestrator | 00:01:27.251 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-11 00:01:27.256452 | orchestrator | 00:01:27.256 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-11 00:01:27.256867 | orchestrator | 00:01:27.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-11 00:01:27.720421 | orchestrator | 00:01:27.718 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-11 00:01:27.727782 | orchestrator | 00:01:27.727 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-11 00:01:27.742274 | orchestrator | 00:01:27.742 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-09-11 00:01:27.749641 | orchestrator | 00:01:27.749 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-11 00:01:27.754959 | orchestrator | 00:01:27.754 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-11 00:01:27.761636 | orchestrator | 00:01:27.761 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-11 00:01:28.208091 | orchestrator | 00:01:28.207 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=a13e5168-3494-4e0a-a13a-168b1448d97c]
2025-09-11 00:01:28.214076 | orchestrator | 00:01:28.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-11 00:01:30.851898 | orchestrator | 00:01:30.851 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=a8a1b225-42a0-4e26-b86d-f2993393243d]
2025-09-11 00:01:30.862812 | orchestrator | 00:01:30.862 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-11 00:01:30.867300 | orchestrator | 00:01:30.867 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=1a36cf1994c1bc500b2999caceeae40ee2ca722d]
2025-09-11 00:01:30.869009 | orchestrator | 00:01:30.868 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=f923a2d7-e50a-4a10-a63c-46b2772477f3]
2025-09-11 00:01:30.873723 | orchestrator | 00:01:30.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-11 00:01:30.882142 | orchestrator | 00:01:30.881 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-11 00:01:30.883129 | orchestrator | 00:01:30.882 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=2cf4645b-2040-4422-b411-f526d3d4b2d7]
2025-09-11 00:01:30.892316 | orchestrator | 00:01:30.892 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-11 00:01:30.894607 | orchestrator | 00:01:30.894 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=75033e65-6f8e-4260-8d0b-0f414b2e283a]
2025-09-11 00:01:30.899819 | orchestrator | 00:01:30.899 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-11 00:01:30.913056 | orchestrator | 00:01:30.912 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=d4713353-19f0-445a-bb8a-6a961d38a233]
2025-09-11 00:01:30.918833 | orchestrator | 00:01:30.917 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-11 00:01:30.940167 | orchestrator | 00:01:30.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=64a40e05-4c55-4984-8320-b8e17729d0c1]
2025-09-11 00:01:30.950514 | orchestrator | 00:01:30.949 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-11 00:01:30.956034 | orchestrator | 00:01:30.955 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=58bdf74e44f1453ea5d2f5ca85abcabe9d31c3e9]
2025-09-11 00:01:30.959743 | orchestrator | 00:01:30.959 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-11 00:01:30.973216 | orchestrator | 00:01:30.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1]
2025-09-11 00:01:30.978329 | orchestrator | 00:01:30.978 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-11 00:01:30.990642 | orchestrator | 00:01:30.990 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=8046510b-ad40-4feb-b71a-a7eb3fa57256]
2025-09-11 00:01:30.995318 | orchestrator | 00:01:30.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=7e2337cd-4ca4-43ee-9815-6c22aae7aa7a]
2025-09-11 00:01:31.568090 | orchestrator | 00:01:31.567 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=c6b39b37-1573-4813-a204-b3511a0e9470]
2025-09-11 00:01:31.814410 | orchestrator | 00:01:31.814 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=981a4dc3-5682-4134-b4a8-835ceabf7a4d]
2025-09-11 00:01:31.822839 | orchestrator | 00:01:31.822 STDOUT terraform: openstack_networking_router_v2.router: Creating...
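The `subnet_management` resource that just completed matches the plan values shown earlier (CIDR 192.168.16.0/20, DHCP enabled, a DHCP allocation pool at the top of the range). Reconstructed as HCL it would look roughly like this; a sketch from the plan output only, with the `net_management` reference being an assumption:

```hcl
# Sketch reconstructed from the plan values above;
# the network_id reference is an assumption.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that the pool (192.168.31.200-250) sits inside the /20 (192.168.16.0-192.168.31.255), leaving the rest of the range free for statically addressed ports.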
2025-09-11 00:01:34.270575 | orchestrator | 00:01:34.270 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=fb39152d-f0f1-4dbf-b4d6-619450119bfd]
2025-09-11 00:01:34.289438 | orchestrator | 00:01:34.289 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=bb43bbb0-6966-49ea-aa1a-91c534974a2c]
2025-09-11 00:01:34.318316 | orchestrator | 00:01:34.317 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=024b17c6-95a2-4ffb-8436-08c360ae905c]
2025-09-11 00:01:34.329734 | orchestrator | 00:01:34.329 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc]
2025-09-11 00:01:34.341267 | orchestrator | 00:01:34.340 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=d34336cb-70d0-416f-8e5b-d5d62ae7b30e]
2025-09-11 00:01:34.348359 | orchestrator | 00:01:34.348 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=8a4df2d3-cf7a-465d-9036-caaca10fcbe1]
2025-09-11 00:01:35.520307 | orchestrator | 00:01:35.519 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=6f6eaf17-ac62-4562-bc1d-eace001c9c58]
2025-09-11 00:01:35.529075 | orchestrator | 00:01:35.528 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-11 00:01:35.529189 | orchestrator | 00:01:35.528 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-11 00:01:35.530217 | orchestrator | 00:01:35.530 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-11 00:01:35.727797 | orchestrator | 00:01:35.722 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=c5ada31c-8393-4183-82b1-a696cdcaaeed]
2025-09-11 00:01:35.732974 | orchestrator | 00:01:35.732 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=e1a180ea-2266-408f-b0e9-76916fcc9923]
2025-09-11 00:01:35.743624 | orchestrator | 00:01:35.743 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-11 00:01:35.744344 | orchestrator | 00:01:35.744 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-11 00:01:35.745065 | orchestrator | 00:01:35.744 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-11 00:01:35.745627 | orchestrator | 00:01:35.745 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-11 00:01:35.753968 | orchestrator | 00:01:35.752 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-11 00:01:35.754281 | orchestrator | 00:01:35.754 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-11 00:01:35.755115 | orchestrator | 00:01:35.754 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-11 00:01:35.755570 | orchestrator | 00:01:35.755 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-11 00:01:35.755649 | orchestrator | 00:01:35.755 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-11 00:01:35.967325 | orchestrator | 00:01:35.966 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=160deda4-6982-4ef9-bc6a-7f3d834670ba]
2025-09-11 00:01:36.002109 | orchestrator | 00:01:35.983 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-11 00:01:36.187766 | orchestrator | 00:01:36.187 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=37675da0-3e21-433c-bd6d-892ee538d66d]
2025-09-11 00:01:36.194423 | orchestrator | 00:01:36.194 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-11 00:01:36.356141 | orchestrator | 00:01:36.355 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=48a22247-34a3-47e1-bd6e-d98f8094a5fe]
2025-09-11 00:01:36.363478 | orchestrator | 00:01:36.363 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-11 00:01:36.419946 | orchestrator | 00:01:36.419 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=4a596cd2-77c1-473b-a204-cf48e7d4f90a]
2025-09-11 00:01:36.425577 | orchestrator | 00:01:36.425 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=80e23495-4ac5-488d-88b0-8f66b859d3ff]
2025-09-11 00:01:36.426051 | orchestrator | 00:01:36.425 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-11 00:01:36.430291 | orchestrator | 00:01:36.430 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-11 00:01:36.466569 | orchestrator | 00:01:36.466 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=e8be9497-141d-4f79-b182-9327f13587ad]
2025-09-11 00:01:36.471214 | orchestrator | 00:01:36.470 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-11 00:01:36.485227 | orchestrator | 00:01:36.484 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=5e96e4cc-26ff-45e4-895e-c4f64b066888]
2025-09-11 00:01:36.493917 | orchestrator | 00:01:36.493 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-11 00:01:36.619907 | orchestrator | 00:01:36.619 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=5199bbcf-a466-43e9-b1fe-93a38f15e911]
2025-09-11 00:01:36.653426 | orchestrator | 00:01:36.653 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=ccc263ce-d2d9-4a37-80b5-12b3a1254c02]
2025-09-11 00:01:36.815966 | orchestrator | 00:01:36.815 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=9010f73a-c3fd-4ff9-9944-0469b01464c9]
2025-09-11 00:01:36.826268 | orchestrator | 00:01:36.825 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=404390f0-e416-4708-8971-073860ebcae3]
2025-09-11 00:01:36.865426 | orchestrator | 00:01:36.861 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=efe5a958-2a56-4f86-84aa-df389a58acc9]
2025-09-11 00:01:36.871599 | orchestrator | 00:01:36.871 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=77536e8d-b160-4a50-9df6-0e6ba4567e62]
2025-09-11 00:01:37.114260 | orchestrator | 00:01:37.113 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=ea229d36-d854-4a77-836f-ccdf4ab4028c]
2025-09-11 00:01:37.296428 | orchestrator | 00:01:37.296 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=fe3d45e2-f826-45a8-8ae0-1cd5780ffde6]
2025-09-11 00:01:37.318362 | orchestrator | 00:01:37.318 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=fa1ed823-68b3-43d8-b642-0124c9dfc131]
2025-09-11 00:01:38.130091 | orchestrator | 00:01:38.129 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=ee89c2cc-5c94-4e83-bde5-2023a52e234b]
2025-09-11 00:01:38.152356 | orchestrator | 00:01:38.152 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-11 00:01:38.160046 | orchestrator | 00:01:38.159 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-11 00:01:38.162364 | orchestrator | 00:01:38.162 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-11 00:01:38.165174 | orchestrator | 00:01:38.164 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-11 00:01:38.168205 | orchestrator | 00:01:38.168 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-11 00:01:38.178087 | orchestrator | 00:01:38.176 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-11 00:01:38.181942 | orchestrator | 00:01:38.181 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-11 00:01:39.500682 | orchestrator | 00:01:39.500 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=621bb149-3780-4183-8e65-87c2a5a9f9f9]
2025-09-11 00:01:39.509870 | orchestrator | 00:01:39.509 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-11 00:01:39.518410 | orchestrator | 00:01:39.518 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-11 00:01:39.519411 | orchestrator | 00:01:39.519 STDOUT terraform: local_file.inventory: Creating...
2025-09-11 00:01:39.521925 | orchestrator | 00:01:39.521 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=6bc617df006f0adead20eb86364ec2602cd35d6d]
2025-09-11 00:01:39.529590 | orchestrator | 00:01:39.529 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=33c76e0d0d49332af33766aa01a2612b7853d7c7]
2025-09-11 00:01:41.645069 | orchestrator | 00:01:41.644 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=621bb149-3780-4183-8e65-87c2a5a9f9f9]
2025-09-11 00:01:48.165207 | orchestrator | 00:01:48.164 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-11 00:01:48.166448 | orchestrator | 00:01:48.166 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-11 00:01:48.170511 | orchestrator | 00:01:48.170 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-11 00:01:48.170725 | orchestrator | 00:01:48.170 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-11 00:01:48.177771 | orchestrator | 00:01:48.177 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-11 00:01:48.185115 | orchestrator | 00:01:48.184 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-11 00:01:58.168181 | orchestrator | 00:01:58.167 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-11 00:01:58.168344 | orchestrator | 00:01:58.168 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-11 00:01:58.171197 | orchestrator | 00:01:58.171 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-11 00:01:58.171364 | orchestrator | 00:01:58.171 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-11 00:01:58.178581 | orchestrator | 00:01:58.178 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-11 00:01:58.185773 | orchestrator | 00:01:58.185 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-11 00:01:58.737983 | orchestrator | 00:01:58.737 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=ff4640f7-dde3-4f06-a44d-6a60fdbb6192]
2025-09-11 00:01:58.800124 | orchestrator | 00:01:58.799 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=5dd446cb-22bb-4cec-8afb-5c37fe476cf2]
2025-09-11 00:01:58.864922 | orchestrator | 00:01:58.864 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=67a27852-e3df-4041-901d-dc3be176f341]
2025-09-11 00:01:59.248583 | orchestrator | 00:01:59.248 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=0c1b6110-060b-4bb7-9af6-8b51f7f38e88]
2025-09-11 00:02:08.173800 | orchestrator | 00:02:08.173 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-11 00:02:08.178860 | orchestrator | 00:02:08.178 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-11 00:02:09.649926 | orchestrator | 00:02:09.649 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 32s [id=8e0e8c36-98bc-4a21-8dd5-5dd13a7b4b9d]
2025-09-11 00:02:10.241472 | orchestrator | 00:02:10.241 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 32s [id=658153e2-e9db-4f67-91c8-f41161138e1c]
2025-09-11 00:02:10.250783 | orchestrator | 00:02:10.250 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-11 00:02:10.264233 | orchestrator | 00:02:10.263 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6299879164953625307]
2025-09-11 00:02:10.271686 | orchestrator | 00:02:10.271 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-11 00:02:10.275929 | orchestrator | 00:02:10.275 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-11 00:02:10.276796 | orchestrator | 00:02:10.276 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-11 00:02:10.277222 | orchestrator | 00:02:10.277 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-11 00:02:10.284739 | orchestrator | 00:02:10.284 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-11 00:02:10.284973 | orchestrator | 00:02:10.284 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-11 00:02:10.294928 | orchestrator | 00:02:10.294 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-11 00:02:10.301471 | orchestrator | 00:02:10.301 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-11 00:02:10.305400 | orchestrator | 00:02:10.305 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-11 00:02:10.309460 | orchestrator | 00:02:10.309 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-11 00:02:13.687725 | orchestrator | 00:02:13.687 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=ff4640f7-dde3-4f06-a44d-6a60fdbb6192/75033e65-6f8e-4260-8d0b-0f414b2e283a]
2025-09-11 00:02:13.716886 | orchestrator | 00:02:13.716 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=658153e2-e9db-4f67-91c8-f41161138e1c/a8a1b225-42a0-4e26-b86d-f2993393243d]
2025-09-11 00:02:13.718325 | orchestrator | 00:02:13.717 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=5dd446cb-22bb-4cec-8afb-5c37fe476cf2/d4713353-19f0-445a-bb8a-6a961d38a233]
2025-09-11 00:02:13.735121 | orchestrator | 00:02:13.734 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=ff4640f7-dde3-4f06-a44d-6a60fdbb6192/7e2337cd-4ca4-43ee-9815-6c22aae7aa7a]
2025-09-11 00:02:13.748694 | orchestrator | 00:02:13.748 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=658153e2-e9db-4f67-91c8-f41161138e1c/8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1]
2025-09-11 00:02:13.762622 | orchestrator | 00:02:13.762 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=5dd446cb-22bb-4cec-8afb-5c37fe476cf2/8046510b-ad40-4feb-b71a-a7eb3fa57256]
2025-09-11 00:02:19.833617 | orchestrator | 00:02:19.833 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=658153e2-e9db-4f67-91c8-f41161138e1c/64a40e05-4c55-4984-8320-b8e17729d0c1]
2025-09-11 00:02:19.882816 | orchestrator | 00:02:19.882 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=ff4640f7-dde3-4f06-a44d-6a60fdbb6192/f923a2d7-e50a-4a10-a63c-46b2772477f3]
2025-09-11 00:02:20.171292 | orchestrator | 00:02:20.170 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=5dd446cb-22bb-4cec-8afb-5c37fe476cf2/2cf4645b-2040-4422-b411-f526d3d4b2d7]
2025-09-11 00:02:20.315659 | orchestrator | 00:02:20.315 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-11 00:02:30.316455 | orchestrator | 00:02:30.316 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-11 00:02:30.791236 | orchestrator | 00:02:30.790 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=2db97085-74be-4d4c-8607-1cb7c82ab306]
2025-09-11 00:02:30.807929 | orchestrator | 00:02:30.807 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
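The `openstack_compute_volume_attach_v2` IDs above are composite `<server_id>/<volume_id>` strings (the same server UUID recurs across several attachments). A minimal shell sketch of splitting such an ID for follow-up CLI calls; the helper names are my own, not part of the job:

```shell
# An openstack_compute_volume_attach_v2 resource ID has the form
# "<server_id>/<volume_id>". These hypothetical helpers split out each half.
attach_server_id() { printf '%s\n' "${1%%/*}"; }   # everything before the first "/"
attach_volume_id() { printf '%s\n' "${1#*/}"; }    # everything after the first "/"

# Example with an ID from the log above:
attach_server_id "ff4640f7-dde3-4f06-a44d-6a60fdbb6192/75033e65-6f8e-4260-8d0b-0f414b2e283a"
# prints ff4640f7-dde3-4f06-a44d-6a60fdbb6192
```

Either half can then be fed to commands such as `openstack server show` or `openstack volume show` when debugging attachment failures.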
2025-09-11 00:02:30.808016 | orchestrator | 00:02:30.807 STDOUT terraform: Outputs:
2025-09-11 00:02:30.808029 | orchestrator | 00:02:30.807 STDOUT terraform: manager_address =
2025-09-11 00:02:30.808038 | orchestrator | 00:02:30.807 STDOUT terraform: private_key =
2025-09-11 00:02:31.095505 | orchestrator | ok: Runtime: 0:01:09.361549
2025-09-11 00:02:31.129480 |
2025-09-11 00:02:31.129617 | TASK [Create infrastructure (stable)]
2025-09-11 00:02:31.663567 | orchestrator | skipping: Conditional result was False
2025-09-11 00:02:31.682074 |
2025-09-11 00:02:31.682228 | TASK [Fetch manager address]
2025-09-11 00:02:32.096685 | orchestrator | ok
2025-09-11 00:02:32.108877 |
2025-09-11 00:02:32.109017 | TASK [Set manager_host address]
2025-09-11 00:02:32.189934 | orchestrator | ok
2025-09-11 00:02:32.200208 |
2025-09-11 00:02:32.200408 | LOOP [Update ansible collections]
2025-09-11 00:02:33.027916 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-11 00:02:33.028221 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-11 00:02:33.028266 | orchestrator | Starting galaxy collection install process
2025-09-11 00:02:33.028297 | orchestrator | Process install dependency map
2025-09-11 00:02:33.028377 | orchestrator | Starting collection install process
2025-09-11 00:02:33.028403 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2025-09-11 00:02:33.028434 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2025-09-11 00:02:33.028464 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-09-11 00:02:33.028523 | orchestrator | ok: Item: commons Runtime: 0:00:00.534252
2025-09-11 00:02:33.851512 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-11 00:02:33.851670 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-11 00:02:33.851718 | orchestrator | Starting galaxy collection install process
2025-09-11 00:02:33.851755 | orchestrator | Process install dependency map
2025-09-11 00:02:33.851790 | orchestrator | Starting collection install process
2025-09-11 00:02:33.851823 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2025-09-11 00:02:33.851856 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2025-09-11 00:02:33.851888 | orchestrator | osism.services:999.0.0 was installed successfully
2025-09-11 00:02:33.851939 | orchestrator | ok: Item: services Runtime: 0:00:00.583703
2025-09-11 00:02:33.869625 |
2025-09-11 00:02:33.869771 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-09-11 00:02:44.419311 | orchestrator | ok
2025-09-11 00:02:44.429181 |
2025-09-11 00:02:44.429273 | TASK [Wait a little longer for the manager so that everything is ready]
2025-09-11 00:03:44.468869 | orchestrator | ok
2025-09-11 00:03:44.478380 |
2025-09-11 00:03:44.478485 | TASK [Fetch manager ssh hostkey]
2025-09-11 00:03:46.035795 | orchestrator | Output suppressed because no_log was given
2025-09-11 00:03:46.053700 |
2025-09-11 00:03:46.053889 | TASK [Get ssh keypair from terraform environment]
2025-09-11 00:03:46.589671 | orchestrator | ok: Runtime: 0:00:00.009794
2025-09-11 00:03:46.605022 |
2025-09-11 00:03:46.605182 | TASK [Point out that the following task takes some time and does not give any output]
2025-09-11 00:03:46.654357 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
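The "Wait up to 300 seconds for port 22" task polls SSH until the banner contains "OpenSSH" (Ansible's `wait_for` with `search_regex`). A rough shell equivalent, assuming `nc` is available on the orchestrator; the banner-matching helper is hypothetical:

```shell
# Return 0 when an SSH banner string looks like OpenSSH -- the same check the
# Ansible wait_for task expresses with search_regex=OpenSSH.
is_openssh_banner() {
  case "$1" in
    *OpenSSH*) return 0 ;;
    *)         return 1 ;;
  esac
}

# Polling sketch (not executed here; assumes nc and a reachable manager IP):
#   until is_openssh_banner "$(printf '' | nc -w 5 "$MANAGER_IP" 22 | head -n 1)"; do
#     sleep 5
#   done
```

Waiting for the banner rather than just an open port avoids racing sshd: the TCP port can accept connections a moment before the daemon is ready to authenticate.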
2025-09-11 00:03:46.665058 |
2025-09-11 00:03:46.665195 | TASK [Run manager part 0]
2025-09-11 00:03:47.475318 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-11 00:03:47.518722 | orchestrator |
2025-09-11 00:03:47.518767 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-09-11 00:03:47.518774 | orchestrator |
2025-09-11 00:03:47.518788 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-09-11 00:03:49.039000 | orchestrator | ok: [testbed-manager]
2025-09-11 00:03:49.039049 | orchestrator |
2025-09-11 00:03:49.039071 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-09-11 00:03:49.039081 | orchestrator |
2025-09-11 00:03:49.039090 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-11 00:03:50.822164 | orchestrator | ok: [testbed-manager]
2025-09-11 00:03:50.822214 | orchestrator |
2025-09-11 00:03:50.822221 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-09-11 00:03:51.449762 | orchestrator | ok: [testbed-manager]
2025-09-11 00:03:51.449816 | orchestrator |
2025-09-11 00:03:51.449825 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-09-11 00:03:51.491018 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:03:51.491099 | orchestrator |
2025-09-11 00:03:51.491122 | orchestrator | TASK [Update package cache] ****************************************************
2025-09-11 00:03:51.517446 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:03:51.517483 | orchestrator |
2025-09-11 00:03:51.517492 | orchestrator | TASK [Install required packages] ***********************************************
2025-09-11 00:03:51.546928 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:03:51.546970 | orchestrator |
2025-09-11 00:03:51.546976 | orchestrator | TASK [Remove some python packages] *********************************************
2025-09-11 00:03:51.581154 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:03:51.581182 | orchestrator |
2025-09-11 00:03:51.581188 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-09-11 00:03:51.610672 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:03:51.610700 | orchestrator |
2025-09-11 00:03:51.610706 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-09-11 00:03:51.639934 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:03:51.639970 | orchestrator |
2025-09-11 00:03:51.639979 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-09-11 00:03:51.666693 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:03:51.666730 | orchestrator |
2025-09-11 00:03:51.666738 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-09-11 00:03:52.341380 | orchestrator | changed: [testbed-manager]
2025-09-11 00:03:52.341427 | orchestrator |
2025-09-11 00:03:52.341434 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-09-11 00:06:26.348867 | orchestrator | changed: [testbed-manager]
2025-09-11 00:06:26.349036 | orchestrator |
2025-09-11 00:06:26.349058 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-11 00:08:13.175960 | orchestrator | changed: [testbed-manager]
2025-09-11 00:08:13.176054 | orchestrator |
2025-09-11 00:08:13.176071 | orchestrator | TASK [Install required packages] ***********************************************
2025-09-11 00:08:34.680320 | orchestrator | changed: [testbed-manager]
2025-09-11 00:08:34.680406 | orchestrator |
2025-09-11 00:08:34.680424 | orchestrator | TASK [Remove some python packages] *********************************************
2025-09-11 00:08:42.619921 | orchestrator | changed: [testbed-manager]
2025-09-11 00:08:42.619966 | orchestrator |
2025-09-11 00:08:42.619975 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-09-11 00:08:42.669475 | orchestrator | ok: [testbed-manager]
2025-09-11 00:08:42.669516 | orchestrator |
2025-09-11 00:08:42.669526 | orchestrator | TASK [Get current user] ********************************************************
2025-09-11 00:08:43.443423 | orchestrator | ok: [testbed-manager]
2025-09-11 00:08:43.444668 | orchestrator |
2025-09-11 00:08:43.444685 | orchestrator | TASK [Create venv directory] ***************************************************
2025-09-11 00:08:44.167884 | orchestrator | changed: [testbed-manager]
2025-09-11 00:08:44.167969 | orchestrator |
2025-09-11 00:08:44.167987 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-09-11 00:08:50.136891 | orchestrator | changed: [testbed-manager]
2025-09-11 00:08:50.136929 | orchestrator |
2025-09-11 00:08:50.136951 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-09-11 00:08:55.443366 | orchestrator | changed: [testbed-manager]
2025-09-11 00:08:55.443448 | orchestrator |
2025-09-11 00:08:55.443468 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-09-11 00:08:58.071309 | orchestrator | changed: [testbed-manager]
2025-09-11 00:08:58.071394 | orchestrator |
2025-09-11 00:08:58.071412 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-09-11 00:08:59.781543 | orchestrator | changed: [testbed-manager]
2025-09-11 00:08:59.781633 | orchestrator |
2025-09-11 00:08:59.781651 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-09-11 00:09:00.838966 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-09-11 00:09:00.839089 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-09-11 00:09:00.839104 | orchestrator |
2025-09-11 00:09:00.839117 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-09-11 00:09:00.936721 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-09-11 00:09:00.936771 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-09-11 00:09:00.936777 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-09-11 00:09:00.936782 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-09-11 00:09:03.928710 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-09-11 00:09:03.928772 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-09-11 00:09:03.928780 | orchestrator |
2025-09-11 00:09:03.928787 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-09-11 00:09:04.500351 | orchestrator | changed: [testbed-manager]
2025-09-11 00:09:04.500390 | orchestrator |
2025-09-11 00:09:04.500398 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-09-11 00:12:25.929613 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-09-11 00:12:25.929721 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-09-11 00:12:25.929741 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-09-11 00:12:25.929754 | orchestrator |
2025-09-11 00:12:25.929767 | orchestrator | TASK [Install local collections] ***********************************************
2025-09-11 00:12:29.072653 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-09-11 00:12:29.072736 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-09-11 00:12:29.072751 | orchestrator |
2025-09-11 00:12:29.072764 | orchestrator | PLAY [Create operator user] ****************************************************
2025-09-11 00:12:29.072777 | orchestrator |
2025-09-11 00:12:29.072789 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-11 00:12:30.484845 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:30.484928 | orchestrator |
2025-09-11 00:12:30.484947 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-09-11 00:12:30.534729 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:30.534782 | orchestrator |
2025-09-11 00:12:30.534791 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-09-11 00:12:30.600756 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:30.600806 | orchestrator |
2025-09-11 00:12:30.600813 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-09-11 00:12:31.384314 | orchestrator | changed: [testbed-manager]
2025-09-11 00:12:31.384398 | orchestrator |
2025-09-11 00:12:31.384415 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-09-11 00:12:32.110236 | orchestrator | changed: [testbed-manager]
2025-09-11 00:12:32.110299 | orchestrator |
2025-09-11 00:12:32.110309 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-09-11 00:12:33.428765 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-09-11 00:12:33.428847 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-09-11 00:12:33.428862 | orchestrator |
2025-09-11 00:12:33.428888 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-09-11 00:12:34.780683 | orchestrator | changed: [testbed-manager]
2025-09-11 00:12:34.780795 | orchestrator |
2025-09-11 00:12:34.780813 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-09-11 00:12:36.530734 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-09-11 00:12:36.530885 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-09-11 00:12:36.530901 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-09-11 00:12:36.530912 | orchestrator |
2025-09-11 00:12:36.530925 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-11 00:12:36.590350 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:12:36.590432 | orchestrator |
2025-09-11 00:12:36.590447 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-11 00:12:37.111335 | orchestrator | changed: [testbed-manager]
2025-09-11 00:12:37.111390 | orchestrator |
2025-09-11 00:12:37.111401 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-11 00:12:37.180632 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:12:37.180681 | orchestrator |
2025-09-11 00:12:37.180690 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-11 00:12:38.035840 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-11 00:12:38.035931 | orchestrator | changed: [testbed-manager]
2025-09-11 00:12:38.035949 | orchestrator |
2025-09-11 00:12:38.035962 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-11 00:12:38.075660 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:12:38.075742 | orchestrator |
2025-09-11 00:12:38.075759 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-11 00:12:38.112135 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:12:38.112213 | orchestrator |
2025-09-11 00:12:38.112230 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-11 00:12:38.144549 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:12:38.144609 | orchestrator |
2025-09-11 00:12:38.144624 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-11 00:12:38.188683 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:12:38.188757 | orchestrator |
2025-09-11 00:12:38.188774 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-11 00:12:38.894777 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:38.894824 | orchestrator |
2025-09-11 00:12:38.894830 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-09-11 00:12:38.894835 | orchestrator |
2025-09-11 00:12:38.894839 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-11 00:12:40.312301 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:40.312389 | orchestrator |
2025-09-11 00:12:40.312406 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-09-11 00:12:41.257203 | orchestrator | changed: [testbed-manager]
2025-09-11 00:12:41.257318 | orchestrator |
2025-09-11 00:12:41.257337 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:12:41.257350 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-09-11 00:12:41.257361 | orchestrator |
2025-09-11 00:12:41.511325 | orchestrator | ok: Runtime: 0:08:54.400778
2025-09-11 00:12:41.528659 |
2025-09-11 00:12:41.528816 | TASK [Point out that logging in to the manager is now possible]
2025-09-11 00:12:41.575293 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-09-11 00:12:41.584246 |
2025-09-11 00:12:41.584367 | TASK [Point out that the following task takes some time and does not give any output]
2025-09-11 00:12:41.619442 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-09-11 00:12:41.627506 |
2025-09-11 00:12:41.627616 | TASK [Run manager part 1 + 2]
2025-09-11 00:12:42.417119 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-11 00:12:42.469237 | orchestrator |
2025-09-11 00:12:42.469323 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-09-11 00:12:42.469331 | orchestrator |
2025-09-11 00:12:42.469344 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-11 00:12:45.325448 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:45.325487 | orchestrator |
2025-09-11 00:12:45.325516 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-09-11 00:12:45.358116 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:12:45.358151 | orchestrator |
2025-09-11 00:12:45.358159 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-09-11 00:12:45.390236 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:45.390287 | orchestrator |
2025-09-11 00:12:45.390296 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-11 00:12:45.434864 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:45.434903 | orchestrator |
2025-09-11 00:12:45.434912 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-11 00:12:45.495765 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:45.495805 | orchestrator |
2025-09-11 00:12:45.495815 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-11 00:12:45.550250 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:45.550324 | orchestrator |
2025-09-11 00:12:45.550334 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-11 00:12:45.589722 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-09-11 00:12:45.589747 | orchestrator |
2025-09-11 00:12:45.589752 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-11 00:12:46.195456 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:46.195501 | orchestrator |
2025-09-11 00:12:46.195509 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-11 00:12:46.242568 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:12:46.242606 | orchestrator |
2025-09-11 00:12:46.242613 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-11 00:12:47.405518 | orchestrator | changed: [testbed-manager]
2025-09-11 00:12:47.405562 | orchestrator |
2025-09-11 00:12:47.405571 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-11 00:12:47.910074 | orchestrator | ok: [testbed-manager]
2025-09-11 00:12:47.910107 | orchestrator |
2025-09-11 00:12:47.910112 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-11 00:12:48.822310 | orchestrator | changed: [testbed-manager]
2025-09-11 00:12:48.822432 | orchestrator |
2025-09-11 00:12:48.822442 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-11 00:13:05.106418 | orchestrator | changed: [testbed-manager]
2025-09-11 00:13:05.106513 | orchestrator |
2025-09-11 00:13:05.106530 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-09-11 00:13:05.761927 | orchestrator | ok: [testbed-manager]
2025-09-11 00:13:05.762010 | orchestrator |
2025-09-11 00:13:05.762058 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-09-11 00:13:05.814444 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:13:05.814515 | orchestrator |
2025-09-11 00:13:05.814529 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-09-11 00:13:06.718082 | orchestrator | changed: [testbed-manager]
2025-09-11 00:13:06.718707 | orchestrator |
2025-09-11 00:13:06.718733 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-09-11 00:13:07.648346 | orchestrator | changed: [testbed-manager]
2025-09-11 00:13:07.648406 | orchestrator |
2025-09-11 00:13:07.648421 | orchestrator | TASK [Create configuration directory] ******************************************
2025-09-11 00:13:08.181388 | orchestrator | changed: [testbed-manager]
2025-09-11 00:13:08.181447 | orchestrator |
2025-09-11 00:13:08.181462 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-09-11 00:13:08.219006 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-09-11 00:13:08.219062 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-09-11 00:13:08.219067 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-09-11 00:13:08.219072 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-09-11 00:13:10.183477 | orchestrator | changed: [testbed-manager]
2025-09-11 00:13:10.183524 | orchestrator |
2025-09-11 00:13:10.183534 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-09-11 00:13:18.814534 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-09-11 00:13:18.814578 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-09-11 00:13:18.814588 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-09-11 00:13:18.814595 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-09-11 00:13:18.814606 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-09-11 00:13:18.814612 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-09-11 00:13:18.814619 | orchestrator |
2025-09-11 00:13:18.814626 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-09-11 00:13:19.869812 | orchestrator | changed: [testbed-manager]
2025-09-11 00:13:19.869854 | orchestrator |
2025-09-11 00:13:19.869863 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-09-11 00:13:19.912535 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:13:19.912570 | orchestrator |
2025-09-11 00:13:19.912578 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-09-11 00:13:22.960838 | orchestrator | changed: [testbed-manager]
2025-09-11 00:13:22.960879 | orchestrator |
2025-09-11 00:13:22.960887 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-09-11 00:13:23.003771 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:13:23.003814 | orchestrator |
2025-09-11 00:13:23.003825 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-09-11 00:15:01.359901 | orchestrator | changed: [testbed-manager]
2025-09-11 00:15:01.359938 | orchestrator |
2025-09-11 00:15:01.359945 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-11 00:15:02.463240 | orchestrator | ok: [testbed-manager]
2025-09-11 00:15:02.463278 | orchestrator |
2025-09-11 00:15:02.463286 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:15:02.463293 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-09-11 00:15:02.463299 | orchestrator |
2025-09-11 00:15:02.747450 | orchestrator | ok: Runtime: 0:02:20.642734
2025-09-11 00:15:02.763772 |
2025-09-11 00:15:02.763959 | TASK [Reboot manager]
2025-09-11 00:15:04.300458 | orchestrator | ok: Runtime: 0:00:00.919345
2025-09-11 00:15:04.319999 |
2025-09-11 00:15:04.320278 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-09-11 00:15:17.737397 | orchestrator | ok
2025-09-11 00:15:17.747342 |
2025-09-11 00:15:17.747466 | TASK [Wait a little longer for the manager so that everything is ready]
2025-09-11 00:16:17.787329 | orchestrator | ok
2025-09-11 00:16:17.796937 |
2025-09-11 00:16:17.797128 | TASK [Deploy manager + bootstrap nodes]
2025-09-11 00:16:20.346573 | orchestrator |
2025-09-11 00:16:20.346773 | orchestrator | # DEPLOY MANAGER
2025-09-11 00:16:20.346797 | orchestrator |
2025-09-11 00:16:20.346811 | orchestrator | + set -e
2025-09-11 00:16:20.346824 | orchestrator | + echo
2025-09-11 00:16:20.346838 | orchestrator | + echo '# DEPLOY MANAGER'
2025-09-11 00:16:20.346855 | orchestrator | + echo
2025-09-11 00:16:20.346905 | orchestrator | + cat /opt/manager-vars.sh
2025-09-11 00:16:20.349779 | orchestrator | export NUMBER_OF_NODES=6
2025-09-11 00:16:20.349809 | orchestrator |
2025-09-11 00:16:20.349821 | orchestrator | export CEPH_VERSION=reef
2025-09-11 00:16:20.349834 | orchestrator | export CONFIGURATION_VERSION=main
2025-09-11 00:16:20.349847 | orchestrator | export MANAGER_VERSION=latest
2025-09-11 00:16:20.349870 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-09-11 00:16:20.349881 | orchestrator |
2025-09-11 00:16:20.349926 | orchestrator | export ARA=false
2025-09-11 00:16:20.349940 | orchestrator | export DEPLOY_MODE=manager
2025-09-11 00:16:20.349957 | orchestrator | export TEMPEST=true
2025-09-11 00:16:20.349969 | orchestrator | export IS_ZUUL=true
2025-09-11 00:16:20.349980 | orchestrator |
2025-09-11 00:16:20.349998 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14
2025-09-11 00:16:20.350009 | orchestrator | export EXTERNAL_API=false
2025-09-11 00:16:20.350070 | orchestrator |
2025-09-11 00:16:20.350082 | orchestrator | export IMAGE_USER=ubuntu
2025-09-11 00:16:20.350096 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-09-11 00:16:20.350107 | orchestrator |
2025-09-11 00:16:20.350118 | orchestrator | export CEPH_STACK=ceph-ansible
2025-09-11 00:16:20.350137 | orchestrator |
2025-09-11 00:16:20.350148 | orchestrator | + echo
2025-09-11 00:16:20.350160 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-11 00:16:20.351228 | orchestrator | ++ export INTERACTIVE=false
2025-09-11 00:16:20.351261 | orchestrator | ++ INTERACTIVE=false
2025-09-11 00:16:20.351274 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-11 00:16:20.351286 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-11 00:16:20.351334 | orchestrator | + source /opt/manager-vars.sh
2025-09-11 00:16:20.351347 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-11 00:16:20.351358 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-11 00:16:20.351369 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-11 00:16:20.351405 | orchestrator | ++ CEPH_VERSION=reef
2025-09-11 00:16:20.351426 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-11 00:16:20.351437 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-11 00:16:20.351453 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-11 00:16:20.351502 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-11 00:16:20.351527 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-11 00:16:20.351547 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-11 00:16:20.351558 | orchestrator | ++ export ARA=false
2025-09-11 00:16:20.351569 | orchestrator | ++ ARA=false
2025-09-11 00:16:20.351580 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-11 00:16:20.351590 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-11 00:16:20.351601 | orchestrator | ++ export TEMPEST=true
2025-09-11 00:16:20.351625 | orchestrator | ++ TEMPEST=true
2025-09-11 00:16:20.351637 | orchestrator | ++ export IS_ZUUL=true
2025-09-11 00:16:20.351647 | orchestrator | ++ IS_ZUUL=true
2025-09-11 00:16:20.351658 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14
2025-09-11 00:16:20.351669 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14
2025-09-11 00:16:20.351679 | orchestrator | ++ export EXTERNAL_API=false
2025-09-11 00:16:20.351690 | orchestrator | ++ EXTERNAL_API=false
2025-09-11 00:16:20.351701 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-11 00:16:20.351711 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-11 00:16:20.351722 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-11 00:16:20.351733 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-11 00:16:20.351748 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-11 00:16:20.351759 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-11 00:16:20.351770 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-09-11 00:16:20.408349 | orchestrator | + docker version
2025-09-11 00:16:20.672935 | orchestrator | Client: Docker Engine - Community
2025-09-11 00:16:20.673015 | orchestrator | Version: 27.5.1
2025-09-11 00:16:20.673027 | orchestrator | API version: 1.47
2025-09-11 00:16:20.673041 | orchestrator | Go version: go1.22.11
2025-09-11 00:16:20.673051 | orchestrator | Git commit: 9f9e405
2025-09-11 00:16:20.673062 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-11 00:16:20.673075 | orchestrator | OS/Arch: linux/amd64
2025-09-11 00:16:20.673085 | orchestrator | Context: default
2025-09-11 00:16:20.673096 | orchestrator |
2025-09-11 00:16:20.673107 | orchestrator | Server: Docker Engine - Community
2025-09-11 00:16:20.673118 | orchestrator | Engine:
2025-09-11 00:16:20.673129 | orchestrator | Version: 27.5.1
2025-09-11 00:16:20.673140 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-09-11 00:16:20.673180 | orchestrator | Go version: go1.22.11
2025-09-11 00:16:20.673191 | orchestrator | Git commit: 4c9b3b0
2025-09-11 00:16:20.673202 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-11 00:16:20.673212 | orchestrator | OS/Arch: linux/amd64
2025-09-11 00:16:20.673223 | orchestrator | Experimental: false
2025-09-11 00:16:20.673234 | orchestrator | containerd:
2025-09-11 00:16:20.673245 | orchestrator | Version: 1.7.27
2025-09-11 00:16:20.673256 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-09-11 00:16:20.673267 | orchestrator | runc:
2025-09-11 00:16:20.673278 | orchestrator | Version: 1.2.5
2025-09-11 00:16:20.673289 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-09-11 00:16:20.673299 | orchestrator | docker-init:
2025-09-11 00:16:20.673310 | orchestrator | Version: 0.19.0
2025-09-11 00:16:20.673322 | orchestrator | GitCommit: de40ad0
2025-09-11 00:16:20.676363 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-11 00:16:20.683711 | orchestrator | + set -e
2025-09-11 00:16:20.683796 | orchestrator | + source /opt/manager-vars.sh
2025-09-11 00:16:20.683810 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-11 00:16:20.683821 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-11 00:16:20.683832 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-11 00:16:20.683843 | orchestrator | ++ CEPH_VERSION=reef
2025-09-11 00:16:20.683854 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-11 00:16:20.683865 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-11 00:16:20.683875 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-11 00:16:20.683886 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-11 00:16:20.683897 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-11 00:16:20.683908 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-11 00:16:20.683918 | orchestrator | ++ export ARA=false
2025-09-11 00:16:20.683929 | orchestrator | ++ ARA=false
2025-09-11 00:16:20.683940 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-11 00:16:20.683951 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-11 00:16:20.683962 | orchestrator | ++ export TEMPEST=true
2025-09-11 00:16:20.683972 | orchestrator | ++ TEMPEST=true
2025-09-11 00:16:20.683983 | orchestrator | ++ export IS_ZUUL=true
2025-09-11 00:16:20.683993 | orchestrator | ++ IS_ZUUL=true
2025-09-11 00:16:20.684004 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14
2025-09-11 00:16:20.684015 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14
2025-09-11 00:16:20.684026 | orchestrator | ++ export EXTERNAL_API=false
2025-09-11 00:16:20.684036 | orchestrator | ++ EXTERNAL_API=false
2025-09-11 00:16:20.684047 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-11 00:16:20.684058 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-11 00:16:20.684068 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-11 00:16:20.684079 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-11 00:16:20.684090 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-11 00:16:20.684101 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-11 00:16:20.684111 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-11 00:16:20.684122 | orchestrator | ++ export INTERACTIVE=false
2025-09-11 00:16:20.684133 | orchestrator | ++ INTERACTIVE=false
2025-09-11 00:16:20.684143 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-11 00:16:20.684157 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-11 00:16:20.684173 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-11 00:16:20.684184 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-11 00:16:20.684195 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-11 00:16:20.691575 | orchestrator | + set -e 2025-09-11 00:16:20.691683 | orchestrator | + VERSION=reef 2025-09-11 00:16:20.692506 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-11 00:16:20.700984 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-11 00:16:20.701040 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-11 00:16:20.707145 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-11 00:16:20.713856 | orchestrator | + set -e 2025-09-11 00:16:20.713940 | orchestrator | + VERSION=2024.2 2025-09-11 00:16:20.714760 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-11 00:16:20.718453 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-11 00:16:20.718520 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-11 00:16:20.723854 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-11 00:16:20.724585 | orchestrator | ++ semver latest 7.0.0 2025-09-11 00:16:20.784197 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-11 00:16:20.784298 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-11 00:16:20.784323 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-11 00:16:20.784346 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-11 00:16:20.876847 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-11 00:16:20.877953 | orchestrator | + source /opt/venv/bin/activate 2025-09-11 00:16:20.879359 | orchestrator | ++ deactivate nondestructive 
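The set-ceph-version.sh and set-openstack-version.sh steps traced above both reduce to the same grep-then-sed in-place rewrite of configuration.yml. A minimal sketch of that pattern, assuming a generic helper (`set_version_key` is a hypothetical name; the log only shows the expanded commands of the real scripts):

```shell
#!/usr/bin/env bash
# Hedged sketch of the version-pinning step from the trace: rewrite a
# "key: value" line in the manager configuration, but only if the key
# is already present (mirrors the `[[ -n $(grep ...) ]]` guard).
set_version_key() {
    local key="$1" version="$2" conf="$3"
    if grep -q "^${key}:" "$conf"; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$conf"
    fi
}
```

For example, `set_version_key ceph_version reef /opt/configuration/environments/manager/configuration.yml` reproduces the sed call seen in the log.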
2025-09-11 00:16:20.879406 | orchestrator | ++ '[' -n '' ']' 2025-09-11 00:16:20.879430 | orchestrator | ++ '[' -n '' ']' 2025-09-11 00:16:20.879441 | orchestrator | ++ hash -r 2025-09-11 00:16:20.879461 | orchestrator | ++ '[' -n '' ']' 2025-09-11 00:16:20.879472 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-11 00:16:20.879518 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-11 00:16:20.879531 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-11 00:16:20.879550 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-11 00:16:20.879578 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-11 00:16:20.879590 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-11 00:16:20.879601 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-11 00:16:20.879612 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-11 00:16:20.879624 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-11 00:16:20.879635 | orchestrator | ++ export PATH 2025-09-11 00:16:20.879669 | orchestrator | ++ '[' -n '' ']' 2025-09-11 00:16:20.879682 | orchestrator | ++ '[' -z '' ']' 2025-09-11 00:16:20.879697 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-11 00:16:20.879708 | orchestrator | ++ PS1='(venv) ' 2025-09-11 00:16:20.879728 | orchestrator | ++ export PS1 2025-09-11 00:16:20.879740 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-11 00:16:20.879751 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-11 00:16:20.879765 | orchestrator | ++ hash -r 2025-09-11 00:16:20.879798 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-11 00:16:22.224569 | orchestrator | 2025-09-11 00:16:22.224683 | orchestrator | PLAY [Copy custom facts] 
******************************************************* 2025-09-11 00:16:22.224700 | orchestrator | 2025-09-11 00:16:22.224712 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-11 00:16:22.836981 | orchestrator | ok: [testbed-manager] 2025-09-11 00:16:22.837094 | orchestrator | 2025-09-11 00:16:22.837110 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-11 00:16:23.786563 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:23.786669 | orchestrator | 2025-09-11 00:16:23.786687 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-11 00:16:23.786699 | orchestrator | 2025-09-11 00:16:23.786711 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-11 00:16:25.882388 | orchestrator | ok: [testbed-manager] 2025-09-11 00:16:25.882539 | orchestrator | 2025-09-11 00:16:25.882558 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-11 00:16:25.934286 | orchestrator | ok: [testbed-manager] 2025-09-11 00:16:25.934324 | orchestrator | 2025-09-11 00:16:25.934340 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-11 00:16:26.359725 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:26.359814 | orchestrator | 2025-09-11 00:16:26.359829 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-11 00:16:26.399534 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:16:26.399574 | orchestrator | 2025-09-11 00:16:26.399588 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-11 00:16:26.729919 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:26.729992 | orchestrator | 2025-09-11 00:16:26.730008 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-09-11 00:16:26.775597 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:16:26.775647 | orchestrator | 2025-09-11 00:16:26.775662 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-11 00:16:27.077173 | orchestrator | ok: [testbed-manager] 2025-09-11 00:16:27.077260 | orchestrator | 2025-09-11 00:16:27.077276 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-11 00:16:27.169450 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:16:27.169560 | orchestrator | 2025-09-11 00:16:27.169576 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-11 00:16:27.169588 | orchestrator | 2025-09-11 00:16:27.169601 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-11 00:16:28.754298 | orchestrator | ok: [testbed-manager] 2025-09-11 00:16:28.755000 | orchestrator | 2025-09-11 00:16:28.755039 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-11 00:16:28.849349 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-11 00:16:28.849412 | orchestrator | 2025-09-11 00:16:28.849426 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-11 00:16:28.906613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-11 00:16:28.906689 | orchestrator | 2025-09-11 00:16:28.906702 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-11 00:16:29.884684 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-11 00:16:29.884764 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-09-11 00:16:29.884777 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-11 00:16:29.884789 | orchestrator | 2025-09-11 00:16:29.884800 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-11 00:16:31.442423 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-11 00:16:31.442541 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-11 00:16:31.442557 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-11 00:16:31.442568 | orchestrator | 2025-09-11 00:16:31.442579 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-11 00:16:32.018463 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-11 00:16:32.018587 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:32.018602 | orchestrator | 2025-09-11 00:16:32.018614 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-11 00:16:32.581882 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-11 00:16:32.581960 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:32.581974 | orchestrator | 2025-09-11 00:16:32.581985 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-09-11 00:16:32.637373 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:16:32.637441 | orchestrator | 2025-09-11 00:16:32.637453 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-11 00:16:32.967267 | orchestrator | ok: [testbed-manager] 2025-09-11 00:16:32.967345 | orchestrator | 2025-09-11 00:16:32.967361 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-11 00:16:33.038600 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-11 00:16:33.038674 | orchestrator | 2025-09-11 00:16:33.038688 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-11 00:16:33.944858 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:33.944932 | orchestrator | 2025-09-11 00:16:33.944946 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-11 00:16:36.628017 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:36.628121 | orchestrator | 2025-09-11 00:16:36.628138 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-11 00:16:47.476475 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:47.476658 | orchestrator | 2025-09-11 00:16:47.476687 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-11 00:16:47.525103 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:16:47.525194 | orchestrator | 2025-09-11 00:16:47.525208 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-11 00:16:47.525221 | orchestrator | 2025-09-11 00:16:47.525232 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-11 00:16:49.243131 | orchestrator | ok: [testbed-manager] 2025-09-11 00:16:49.243234 | orchestrator | 2025-09-11 00:16:49.243281 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-11 00:16:49.340575 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-11 00:16:49.340671 | orchestrator | 2025-09-11 00:16:49.340696 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-11 00:16:49.397397 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-11 00:16:49.397477 | orchestrator | 2025-09-11 00:16:49.397526 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-11 00:16:52.929252 | orchestrator | ok: [testbed-manager] 2025-09-11 00:16:52.929347 | orchestrator | 2025-09-11 00:16:52.929360 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-11 00:16:52.980399 | orchestrator | ok: [testbed-manager] 2025-09-11 00:16:52.980498 | orchestrator | 2025-09-11 00:16:52.980518 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-11 00:16:53.099674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-11 00:16:53.099773 | orchestrator | 2025-09-11 00:16:53.099790 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-11 00:16:55.813789 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-11 00:16:55.813863 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-11 00:16:55.813896 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-11 00:16:55.813907 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-11 00:16:55.813916 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-11 00:16:55.813925 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-11 00:16:55.813934 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-11 00:16:55.813943 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-11 00:16:55.813952 | orchestrator | 2025-09-11 00:16:55.813962 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-09-11 00:16:56.371659 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:56.371743 | orchestrator | 2025-09-11 00:16:56.371758 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-11 00:16:57.652221 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:57.652307 | orchestrator | 2025-09-11 00:16:57.652323 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-11 00:16:57.725654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-11 00:16:57.725697 | orchestrator | 2025-09-11 00:16:57.725710 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-11 00:16:58.831368 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-11 00:16:58.831457 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-11 00:16:58.831472 | orchestrator | 2025-09-11 00:16:58.831523 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-11 00:16:59.402528 | orchestrator | changed: [testbed-manager] 2025-09-11 00:16:59.402611 | orchestrator | 2025-09-11 00:16:59.402627 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-11 00:16:59.453847 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:16:59.453887 | orchestrator | 2025-09-11 00:16:59.453900 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-11 00:16:59.534403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-11 00:16:59.534518 | orchestrator | 2025-09-11 00:16:59.534536 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2025-09-11 00:17:00.103023 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:00.103102 | orchestrator | 2025-09-11 00:17:00.103116 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-11 00:17:00.150282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-11 00:17:00.150383 | orchestrator | 2025-09-11 00:17:00.150398 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-11 00:17:01.354866 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-11 00:17:01.354950 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-11 00:17:01.354966 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:01.354984 | orchestrator | 2025-09-11 00:17:01.355002 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-11 00:17:01.914145 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:01.914226 | orchestrator | 2025-09-11 00:17:01.914240 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-11 00:17:01.969109 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:17:01.969154 | orchestrator | 2025-09-11 00:17:01.969167 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-11 00:17:02.739599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-11 00:17:02.739698 | orchestrator | 2025-09-11 00:17:02.739714 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-11 00:17:03.192456 | orchestrator | changed: [testbed-manager] 2025-09-11 
00:17:03.192584 | orchestrator | 2025-09-11 00:17:03.192599 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-11 00:17:03.564136 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:03.564221 | orchestrator | 2025-09-11 00:17:03.564238 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-11 00:17:04.658323 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-11 00:17:04.658416 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-11 00:17:04.658431 | orchestrator | 2025-09-11 00:17:04.658445 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-11 00:17:05.180128 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:05.180215 | orchestrator | 2025-09-11 00:17:05.180231 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-11 00:17:05.510793 | orchestrator | ok: [testbed-manager] 2025-09-11 00:17:05.510867 | orchestrator | 2025-09-11 00:17:05.510881 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-11 00:17:05.842921 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:05.843004 | orchestrator | 2025-09-11 00:17:05.843019 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-11 00:17:05.888518 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:17:05.888588 | orchestrator | 2025-09-11 00:17:05.888602 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-11 00:17:05.959610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-11 00:17:05.959684 | orchestrator | 2025-09-11 00:17:05.959697 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2025-09-11 00:17:05.994566 | orchestrator | ok: [testbed-manager] 2025-09-11 00:17:05.994618 | orchestrator | 2025-09-11 00:17:05.994630 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-11 00:17:07.962070 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-11 00:17:07.962176 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-11 00:17:07.962193 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-11 00:17:07.962205 | orchestrator | 2025-09-11 00:17:07.962219 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-11 00:17:08.632040 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:08.632139 | orchestrator | 2025-09-11 00:17:08.632157 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-11 00:17:09.303321 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:09.303422 | orchestrator | 2025-09-11 00:17:09.303438 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-11 00:17:09.991242 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:09.992149 | orchestrator | 2025-09-11 00:17:09.992190 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-11 00:17:10.057689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-11 00:17:10.057759 | orchestrator | 2025-09-11 00:17:10.057773 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-11 00:17:10.098452 | orchestrator | ok: [testbed-manager] 2025-09-11 00:17:10.098516 | orchestrator | 2025-09-11 00:17:10.098529 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2025-09-11 00:17:10.798610 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-11 00:17:10.798709 | orchestrator | 2025-09-11 00:17:10.798725 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-11 00:17:10.874062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-11 00:17:10.874127 | orchestrator | 2025-09-11 00:17:10.874140 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-11 00:17:11.555080 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:11.555176 | orchestrator | 2025-09-11 00:17:11.555191 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-11 00:17:12.123096 | orchestrator | ok: [testbed-manager] 2025-09-11 00:17:12.123194 | orchestrator | 2025-09-11 00:17:12.123209 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-11 00:17:12.175720 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:17:12.175787 | orchestrator | 2025-09-11 00:17:12.175800 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-11 00:17:12.233441 | orchestrator | ok: [testbed-manager] 2025-09-11 00:17:12.233536 | orchestrator | 2025-09-11 00:17:12.233550 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-11 00:17:13.058304 | orchestrator | changed: [testbed-manager] 2025-09-11 00:17:13.058406 | orchestrator | 2025-09-11 00:17:13.058421 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-11 00:18:39.965632 | orchestrator | changed: [testbed-manager] 2025-09-11 00:18:39.965748 | orchestrator | 2025-09-11 
00:18:39.965765 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-11 00:18:41.009692 | orchestrator | ok: [testbed-manager] 2025-09-11 00:18:41.009787 | orchestrator | 2025-09-11 00:18:41.009802 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-11 00:18:41.054780 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:18:41.054828 | orchestrator | 2025-09-11 00:18:41.054845 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-11 00:19:06.572646 | orchestrator | changed: [testbed-manager] 2025-09-11 00:19:06.572752 | orchestrator | 2025-09-11 00:19:06.572768 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-11 00:19:06.631219 | orchestrator | ok: [testbed-manager] 2025-09-11 00:19:06.631289 | orchestrator | 2025-09-11 00:19:06.631303 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-11 00:19:06.631315 | orchestrator | 2025-09-11 00:19:06.631326 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-11 00:19:06.672842 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:19:06.672884 | orchestrator | 2025-09-11 00:19:06.672897 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-11 00:20:06.718528 | orchestrator | Pausing for 60 seconds 2025-09-11 00:20:06.718628 | orchestrator | changed: [testbed-manager] 2025-09-11 00:20:06.718644 | orchestrator | 2025-09-11 00:20:06.718658 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-11 00:20:10.726209 | orchestrator | changed: [testbed-manager] 2025-09-11 00:20:10.726297 | orchestrator | 2025-09-11 00:20:10.726314 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2025-09-11 00:20:52.319941 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-11 00:20:52.320039 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-09-11 00:20:52.320049 | orchestrator | changed: [testbed-manager] 2025-09-11 00:20:52.320081 | orchestrator | 2025-09-11 00:20:52.320089 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-11 00:21:01.640767 | orchestrator | changed: [testbed-manager] 2025-09-11 00:21:01.640875 | orchestrator | 2025-09-11 00:21:01.640894 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-11 00:21:01.718313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-11 00:21:01.718421 | orchestrator | 2025-09-11 00:21:01.718445 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-11 00:21:01.718467 | orchestrator | 2025-09-11 00:21:01.718527 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-11 00:21:01.763542 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:21:01.763607 | orchestrator | 2025-09-11 00:21:01.763620 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:21:01.763633 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-09-11 00:21:01.763645 | orchestrator | 2025-09-11 00:21:01.849987 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-11 00:21:01.850103 | orchestrator | + deactivate 2025-09-11 00:21:01.850117 | orchestrator | + '[' -n 
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-11 00:21:01.850131 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-11 00:21:01.850142 | orchestrator | + export PATH
2025-09-11 00:21:01.850153 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-11 00:21:01.850165 | orchestrator | + '[' -n '' ']'
2025-09-11 00:21:01.850176 | orchestrator | + hash -r
2025-09-11 00:21:01.850210 | orchestrator | + '[' -n '' ']'
2025-09-11 00:21:01.850221 | orchestrator | + unset VIRTUAL_ENV
2025-09-11 00:21:01.850232 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-11 00:21:01.850244 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-11 00:21:01.850255 | orchestrator | + unset -f deactivate
2025-09-11 00:21:01.850266 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-11 00:21:01.856275 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-11 00:21:01.856298 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-11 00:21:01.856309 | orchestrator | + local max_attempts=60
2025-09-11 00:21:01.856320 | orchestrator | + local name=ceph-ansible
2025-09-11 00:21:01.856331 | orchestrator | + local attempt_num=1
2025-09-11 00:21:01.857622 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:21:01.897190 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-11 00:21:01.897245 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-11 00:21:01.897258 | orchestrator | + local max_attempts=60
2025-09-11 00:21:01.897269 | orchestrator | + local name=kolla-ansible
2025-09-11 00:21:01.897280 | orchestrator | + local attempt_num=1
2025-09-11 00:21:01.898319 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-11 00:21:01.940367 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-11 00:21:01.940406 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-11 00:21:01.940418 | orchestrator | + local max_attempts=60
2025-09-11 00:21:01.940429 | orchestrator | + local name=osism-ansible
2025-09-11 00:21:01.940440 | orchestrator | + local attempt_num=1
2025-09-11 00:21:01.941360 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-11 00:21:01.981858 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-11 00:21:01.981896 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-11 00:21:01.981908 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-11 00:21:02.658214 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-11 00:21:02.846187 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-11 00:21:02.846274 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-11 00:21:02.846286 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-11 00:21:02.846319 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-11 00:21:02.846330 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-11 00:21:02.846346 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-11 00:21:02.846355 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-11 00:21:02.846363 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-09-11 00:21:02.846371 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-11 00:21:02.846379 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-11 00:21:02.846386 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-11 00:21:02.846394 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-11 00:21:02.846402 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-11 00:21:02.846410 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-11 00:21:02.846418 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-11 00:21:02.846426 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-11 00:21:02.852017 | orchestrator | ++ semver latest 7.0.0
2025-09-11 00:21:02.903451 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-11 00:21:02.903542 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-11 00:21:02.903556 | orchestrator | + sed -i
s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-09-11 00:21:02.907675 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-09-11 00:21:15.129403 | orchestrator | 2025-09-11 00:21:15 | INFO  | Task 95295566-67c7-446d-8b7b-9b74f32699fc (resolvconf) was prepared for execution.
2025-09-11 00:21:15.129586 | orchestrator | 2025-09-11 00:21:15 | INFO  | It takes a moment until task 95295566-67c7-446d-8b7b-9b74f32699fc (resolvconf) has been started and output is visible here.
2025-09-11 00:21:29.324570 | orchestrator |
2025-09-11 00:21:29.324714 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-09-11 00:21:29.324747 | orchestrator |
2025-09-11 00:21:29.324767 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-11 00:21:29.324824 | orchestrator | Thursday 11 September 2025 00:21:18 +0000 (0:00:00.145) 0:00:00.145 ****
2025-09-11 00:21:29.324844 | orchestrator | ok: [testbed-manager]
2025-09-11 00:21:29.324864 | orchestrator |
2025-09-11 00:21:29.324882 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-11 00:21:29.324901 | orchestrator | Thursday 11 September 2025 00:21:23 +0000 (0:00:04.619) 0:00:04.764 ****
2025-09-11 00:21:29.324919 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:21:29.324937 | orchestrator |
2025-09-11 00:21:29.324955 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-11 00:21:29.324973 | orchestrator | Thursday 11 September 2025 00:21:23 +0000 (0:00:00.077) 0:00:04.842 ****
2025-09-11 00:21:29.324993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-09-11 00:21:29.325014 | orchestrator |
2025-09-11 00:21:29.325035 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-11 00:21:29.325057 | orchestrator | Thursday 11 September 2025 00:21:23 +0000 (0:00:00.075) 0:00:04.917 ****
2025-09-11 00:21:29.325077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-09-11 00:21:29.325097 | orchestrator |
2025-09-11 00:21:29.325117 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-11 00:21:29.325139 | orchestrator | Thursday 11 September 2025 00:21:23 +0000 (0:00:00.071) 0:00:04.988 ****
2025-09-11 00:21:29.325159 | orchestrator | ok: [testbed-manager]
2025-09-11 00:21:29.325177 | orchestrator |
2025-09-11 00:21:29.325195 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-11 00:21:29.325214 | orchestrator | Thursday 11 September 2025 00:21:24 +0000 (0:00:01.018) 0:00:06.007 ****
2025-09-11 00:21:29.325232 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:21:29.325250 | orchestrator |
2025-09-11 00:21:29.325267 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-11 00:21:29.325286 | orchestrator | Thursday 11 September 2025 00:21:24 +0000 (0:00:00.492) 0:00:06.052 ****
2025-09-11 00:21:29.325303 | orchestrator | ok: [testbed-manager]
2025-09-11 00:21:29.325321 | orchestrator |
2025-09-11 00:21:29.325340 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-11 00:21:29.325357 | orchestrator | Thursday 11 September 2025 00:21:25 +0000 (0:00:00.077) 0:00:06.544 ****
2025-09-11 00:21:29.325375 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:21:29.325394 | orchestrator |
2025-09-11 00:21:29.325412 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-11 00:21:29.325430 | orchestrator | Thursday 11 September 2025 00:21:25 +0000 (0:00:00.077) 0:00:06.621 ****
2025-09-11 00:21:29.325446 | orchestrator | changed: [testbed-manager]
2025-09-11 00:21:29.325462 | orchestrator |
2025-09-11 00:21:29.325514 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-11 00:21:29.325534 | orchestrator | Thursday 11 September 2025 00:21:25 +0000 (0:00:00.516) 0:00:07.138 ****
2025-09-11 00:21:29.325553 | orchestrator | changed: [testbed-manager]
2025-09-11 00:21:29.325571 | orchestrator |
2025-09-11 00:21:29.325590 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-11 00:21:29.325607 | orchestrator | Thursday 11 September 2025 00:21:26 +0000 (0:00:01.001) 0:00:08.139 ****
2025-09-11 00:21:29.325625 | orchestrator | ok: [testbed-manager]
2025-09-11 00:21:29.325642 | orchestrator |
2025-09-11 00:21:29.325659 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-11 00:21:29.325677 | orchestrator | Thursday 11 September 2025 00:21:27 +0000 (0:00:00.943) 0:00:09.082 ****
2025-09-11 00:21:29.325712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-09-11 00:21:29.325750 | orchestrator |
2025-09-11 00:21:29.325772 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-11 00:21:29.325789 | orchestrator | Thursday 11 September 2025 00:21:27 +0000 (0:00:00.084) 0:00:09.167 ****
2025-09-11 00:21:29.325806 | orchestrator | changed: [testbed-manager]
2025-09-11 00:21:29.325817 | orchestrator |
2025-09-11 00:21:29.325827 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:21:29.325839 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-11 00:21:29.325850 | orchestrator |
2025-09-11 00:21:29.325861 | orchestrator |
2025-09-11 00:21:29.325871 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:21:29.325882 | orchestrator | Thursday 11 September 2025 00:21:29 +0000 (0:00:01.125) 0:00:10.293 ****
2025-09-11 00:21:29.325893 | orchestrator | ===============================================================================
2025-09-11 00:21:29.325903 | orchestrator | Gathering Facts --------------------------------------------------------- 4.62s
2025-09-11 00:21:29.325914 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s
2025-09-11 00:21:29.325924 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.02s
2025-09-11 00:21:29.325935 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.00s
2025-09-11 00:21:29.325945 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s
2025-09-11 00:21:29.325956 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s
2025-09-11 00:21:29.325992 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-09-11 00:21:29.326003 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-09-11 00:21:29.326014 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s
2025-09-11 00:21:29.326082 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-09-11 00:21:29.326093 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-09-11 00:21:29.326103 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-09-11 00:21:29.326114 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.04s
2025-09-11 00:21:29.584444 | orchestrator | + osism apply sshconfig
2025-09-11 00:21:41.512100 | orchestrator | 2025-09-11 00:21:41 | INFO  | Task 9d1bacce-118f-44c2-a918-dc5a7c0d512c (sshconfig) was prepared for execution.
2025-09-11 00:21:41.512217 | orchestrator | 2025-09-11 00:21:41 | INFO  | It takes a moment until task 9d1bacce-118f-44c2-a918-dc5a7c0d512c (sshconfig) has been started and output is visible here.
2025-09-11 00:21:51.846837 | orchestrator |
2025-09-11 00:21:51.846953 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-09-11 00:21:51.846969 | orchestrator |
2025-09-11 00:21:51.846981 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-09-11 00:21:51.846993 | orchestrator | Thursday 11 September 2025 00:21:45 +0000 (0:00:00.119) 0:00:00.119 ****
2025-09-11 00:21:51.847004 | orchestrator | ok: [testbed-manager]
2025-09-11 00:21:51.847016 | orchestrator |
2025-09-11 00:21:51.847027 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-09-11 00:21:51.847038 | orchestrator | Thursday 11 September 2025 00:21:45 +0000 (0:00:00.502) 0:00:00.621 ****
2025-09-11 00:21:51.847049 | orchestrator | changed: [testbed-manager]
2025-09-11 00:21:51.847060 | orchestrator |
2025-09-11 00:21:51.847072 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-09-11 00:21:51.847083 | orchestrator | Thursday 11 September 2025 00:21:45 +0000 (0:00:00.437) 0:00:01.059 ****
2025-09-11 00:21:51.847095 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-09-11 00:21:51.847105 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-09-11 00:21:51.847145 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-09-11 00:21:51.847157 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-09-11 00:21:51.847168 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-09-11 00:21:51.847195 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-09-11 00:21:51.847206 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-09-11 00:21:51.847217 | orchestrator |
2025-09-11 00:21:51.847228 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-09-11 00:21:51.847239 | orchestrator | Thursday 11 September 2025 00:21:51 +0000 (0:00:05.030) 0:00:06.090 ****
2025-09-11 00:21:51.847249 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:21:51.847260 | orchestrator |
2025-09-11 00:21:51.847270 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-09-11 00:21:51.847281 | orchestrator | Thursday 11 September 2025 00:21:51 +0000 (0:00:00.069) 0:00:06.159 ****
2025-09-11 00:21:51.847292 | orchestrator | changed: [testbed-manager]
2025-09-11 00:21:51.847302 | orchestrator |
2025-09-11 00:21:51.847313 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:21:51.847324 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 00:21:51.847336 | orchestrator |
2025-09-11 00:21:51.847346 | orchestrator |
2025-09-11 00:21:51.847357 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:21:51.847368 | orchestrator | Thursday 11 September 2025 00:21:51 +0000 (0:00:00.543) 0:00:06.703 ****
2025-09-11 00:21:51.847381 | orchestrator | ===============================================================================
2025-09-11 00:21:51.847393 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.03s
2025-09-11 00:21:51.847406 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s
2025-09-11 00:21:51.847418 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.50s
2025-09-11 00:21:51.847430 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.44s
2025-09-11 00:21:51.847442 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-09-11 00:21:52.097414 | orchestrator | + osism apply known-hosts
2025-09-11 00:22:04.025086 | orchestrator | 2025-09-11 00:22:04 | INFO  | Task 2a551ab9-1273-49b7-be91-28b6709bcab0 (known-hosts) was prepared for execution.
2025-09-11 00:22:04.025193 | orchestrator | 2025-09-11 00:22:04 | INFO  | It takes a moment until task 2a551ab9-1273-49b7-be91-28b6709bcab0 (known-hosts) has been started and output is visible here.
2025-09-11 00:22:20.899368 | orchestrator |
2025-09-11 00:22:20.899524 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-09-11 00:22:20.899543 | orchestrator |
2025-09-11 00:22:20.899555 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-09-11 00:22:20.899567 | orchestrator | Thursday 11 September 2025 00:22:07 +0000 (0:00:00.120) 0:00:00.120 ****
2025-09-11 00:22:20.899579 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-11 00:22:20.899590 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-11 00:22:20.899601 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-11 00:22:20.899611 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-11 00:22:20.899622 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-11 00:22:20.899633 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-11 00:22:20.899643 | orchestrator |
ok: [testbed-manager] => (item=testbed-node-2) 2025-09-11 00:22:20.899654 | orchestrator | 2025-09-11 00:22:20.899665 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-11 00:22:20.899677 | orchestrator | Thursday 11 September 2025 00:22:12 +0000 (0:00:05.504) 0:00:05.624 **** 2025-09-11 00:22:20.899715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-11 00:22:20.899729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-11 00:22:20.899740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-11 00:22:20.899750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-11 00:22:20.899761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-11 00:22:20.899781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-11 00:22:20.899793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-11 00:22:20.899804 | orchestrator | 2025-09-11 00:22:20.899814 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:20.899825 | orchestrator | Thursday 11 September 2025 00:22:12 +0000 (0:00:00.142) 0:00:05.766 **** 2025-09-11 00:22:20.899837 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBMw6rg1W+0yhhwMxcLiEuQveBqZYxXZi2PvEztsgQMU5x97MBmycGECaNP9MIRLn9dnUNpSsyMZyq82AROrk2M=) 2025-09-11 00:22:20.899853 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfEyHneG0UYAnvPVHpphG0nepF3UT9TQyPt+d2szjgZuetKmr0ptfTvpAEUQ2m5P7TmUaBGC0YOvkjrb3R1GzU5i2p6h4mBeYYMmWm6F607OfHzBrL34NNL7duGLYV4Nm31XOcg9mj8kDSiS+pPWJDXYI8F8COwvQif75Vzuo+afEJqpDAOAE2hvWyJk6MkqxRt2MdGQZ7s8KEG6iGKgKYB6jOWrvPL1xN+oJ+saZxZyowINs/6F4ac68zXoG09KKBCNSgTzGSzrwYb/wDFiv0m8Pwc2Wku+DmMgkTnMPtIgPU2bxncNDevXVy2paejVsqR0AHFt3SLP6T27gOkZZI1AnJyq9v2LzYUZrYv25DKoGVBYZkO1ro3dhxQ0XL+vwbY5WeGjHZR+OSJ3us+DA9w3ewvOYE8ZIw4JK1vRS6HTq9XuocqBOX5C/tZKKDWiVLjLq083HdSCQ2AelBWLO7j6dn4Y9gZcZoyeTNY3yLhJ54cVYGqDGXCtp8L9+Ibz0=) 2025-09-11 00:22:20.899868 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICPEeh4vXsySyuVDMSYDuCMwZC39zr5MfeYlymeBBLOl) 2025-09-11 00:22:20.899881 | orchestrator | 2025-09-11 00:22:20.899893 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:20.899905 | orchestrator | Thursday 11 September 2025 00:22:13 +0000 (0:00:01.017) 0:00:06.784 **** 2025-09-11 00:22:20.899938 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDH3QprxnmNC26sKlt/mMSq9G2zvTl4QXwyP/Yr3CpLsfvX3tbEulmOGRBqIGa/+szLas7HRYNS1EIiJhFKDI8bXi12Cs9R7b7RwDe1/fxWC5if0zUy1O2L/tE363xSeS3fRqdXk148S+up7L3X4ynKr2qOFmMUO06P/My0sr2Jogd/tWeG4ZjqBFgSncvoe2Xx613SPaDUC0k3VN2TVrpyJCJj5D7NcpukTwtJGP9XKB71RbuRvPTpk+g5HI/TNGPpfmEKFR5Rq/+IMrlyd6ygYtZHoT3rGcepGCVPdCR4arajT2Chs71VhDK1VJfozaeqgPy45TlAcwyZN65t+8nK9JZ5khYJfCs7tAOHtu2JC2frbVcFaF3uF4K3oFl4qs5qb0nsDmOXy2vB3lFP8qR7jBqoEz5ZOAXUlDfcSlE4GRXhhqQwkCThIWcvYCB5h6CE6/GKVz+ype1R4wfdJGs29hoTt2BxsYlbD19bBe4Eq7r3fSou5LJ3sY+nYdikm4c=) 2025-09-11 00:22:20.899953 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPOYyND9YuWELMGYk1ViTpa77P0BI8KdFcolnTGEHrCp+ruU1Ab534OMEbC8Gk8A0Cg+rTWNIZ1cgW+QLJk1HDQ=) 2025-09-11 00:22:20.899966 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHMX5oc1QAPmwqA7ENnlTjiKanQFb4oMTerHcWMxTwFm) 2025-09-11 00:22:20.899988 | orchestrator | 2025-09-11 00:22:20.900000 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:20.900013 | orchestrator | Thursday 11 September 2025 00:22:15 +0000 (0:00:01.969) 0:00:08.754 **** 2025-09-11 00:22:20.900026 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpjkx2XsP2SdravudGUik8m60DLJu4/kSKVKSs8mgLDnHt3nIh/TS2S1Idad4aPjNzRmn4EbxGSUIYxho5oY9qA7m5kK9MUEqr63BQRHMcD+WlhHLCIan+ck1DdZSjL4ty8H75LMfCBrAUqhqQO1HIlyljkjQar9a+/PNg1WFngapojUBTjAKoGVAqjOq+wuPfrM3Vlbpkgaba2EG9I0L9FV3nbxZhTTjNdtO5rU9c7MnBHTP2s4wuBq2vnJg1LbOYTJ9IU3EBYSCChLd5bf8yLxZTHQff0qgAXHc+gYV1Ynz+vulwHLONK/oQ62xQcOaIvWHoN0uKvqkwC3q4VKeRzFfohbi5882XCFDRQ5YbrVGC1GjXyMvFUQdaoBOqCt7XMJdMmtX4iG2T0wS9mjr+EBE5UJqsgAFgvmxXmpOI9fxGxwttJKpw0WODt6K9kCIxENfsvKDG4vV9O7m32Yi/5I2SzMhZxZgi6IBqXBw6aE0O3944tEt3a2JjKGhJ1O8=) 2025-09-11 00:22:20.900039 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBvjDWzYxuxhme3rd9mlFeZlDpGos0/xCCbAFjbiCb+N) 2025-09-11 00:22:20.900052 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHseSbiv/+H5CGi1J9Z6B9pFECkSq0gpa5rkg5IsXwQ0xenVkR66XOzXqTgzKLIe3MCJU8jYRblRdv2rDdRdV/o=) 2025-09-11 00:22:20.900064 | orchestrator | 2025-09-11 00:22:20.900077 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:20.900090 | orchestrator | Thursday 11 September 2025 00:22:16 +0000 (0:00:00.997) 0:00:09.751 **** 2025-09-11 00:22:20.900102 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN0w2G9brWqdUGENsdPb8GNBEEPdzOuW1MJdDzs+1XDJ) 2025-09-11 00:22:20.900181 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI0k96rnOFLDhgKvsqTbBCipJSOupOtclAgNmMGcoNexeK/CI45BRvPZWsBLDDYg7MPqxPPSh4UPwt8+CF4VhE5nKwdknNNv93+w4rrWfHarArco2UjCeanEGa3VuMDnw8EHByFpLMwBCIwzueTG/WpYICD9LkhF3WxivhofQqJEJseLUPX5B8+uhXA7/L8BnngjtJBoGOAcYafVqphOuYLfdwQgD/cGpkcj6NIlK11yHpPz8cfGw1adrkRmHLqWqz2MXlg7MVyQzrYveQrM9GO7/EHuj5li4agdsmwBNB7stcnLgqLdoBtmMLb4vk5Rf2o1wF/n7DLwkfK4CDCKuKhBUrU2/JO7g9DdvcQF/+RngwHoEqhbemGrjSnmcK3UneURWtfzYBg6oQ+mxtJB4ueyu3wJLe5IL32xpOt+Khoyq+TeBxZXcpn9tqh6ADeYzkr7xyr+EA152WB2Ywm3axalXxGO895G8EtxJyT6bq2RwgGF+BVwXAmAu61YTsW5U=) 2025-09-11 00:22:20.900196 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJQ8wkBAn3gylocvMgUX0YPJhET7Ge+ghFu079hQXhf6pPJl6fAexxIJxgZjBe+tqvvpKnL1jLA92hNVD94SDBQ=) 2025-09-11 00:22:20.900208 | orchestrator | 2025-09-11 00:22:20.900221 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:20.900234 | orchestrator | Thursday 11 September 2025 00:22:18 +0000 (0:00:02.029) 
0:00:11.781 **** 2025-09-11 00:22:20.900246 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZCmNiiX5XMv1M5COyrTF9RrikZ7WW9F1C8tslM3ZFUdge//atiJHT9utUbLgBXXWY3kpKw6P9aEL93p1Hs5QELuUtg8cjBM/xiSgifdNC+rpR3bJbq6WaA5EzOM/kRWtUaX40lO/C4swiMMsclFUMygpXrgPp9L1YvFCwuOJZwYr82PsUrOXVRZ5JnvQoINNfnAISd2hq17klKtdl148vCOf1WB4MexvMtq0pA454FJw2ngG94kLT8NOSDaE799qORLObgfEsQPMhd/DlXmci/OGI58ljDgvf5YEf4RhQXsawlkCoJkQgCGmN6DLFarZxgLNjPLHMtNwn/Koyyi6e6JCOxgvYcPnFp6xIGcbgtVS+PgmtqZGkODQj/fi1U+tmj3ePdOz2yE+WCxgme7w1EYXmAJIVcIzAtUr4Chx/mn967gz56LWjkml5kI4YEiisk41/0tT+UttFXh9UKy8oacDxqZ3eoDmHFZ3/sHy52pmA/+dxvrRPRJJ4JbpAczU=) 2025-09-11 00:22:20.900257 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBABJ5cvcMpYZ1wBIm4B1EConHat98dyh/yC4ntJoC+oBSdo+YWMl3Mhtl4tS92eq5btUyly2RZ0pMUT9lCJIKrw=) 2025-09-11 00:22:20.900268 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDxak1QYRtVBoQ5h5CRLVPFyauSWh1VJDXUxN0ILa1UY) 2025-09-11 00:22:20.900286 | orchestrator | 2025-09-11 00:22:20.900297 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:20.900308 | orchestrator | Thursday 11 September 2025 00:22:19 +0000 (0:00:01.004) 0:00:12.786 **** 2025-09-11 00:22:20.900328 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDjp7vEua6dyUZM9v0slEeLKmgfWNRusuGqXb8wgNO2Q7pj9K2m8zzLRgt0mv+roM576TR82sSWyxIRX2UXvc1NJqFV/Fq4tGH2jZLlwRkyNmlfQ3SSs67notm0noOFP0F4jp3PEnHCwjDXHrL54SxSa4zmz0P2abaODo/goOv7Ce/SeABW5TlMbMeddMoAODq35EwF6h1pfZ6ZSnxQC7eNqH4xJry4tqRwuDpX2sK2f3djixbFeLMRRUT3hDs5KIOG4Abfmumk+oorxoqFWKw5fu6SvghSso+0WR854fIXYGo3gjDaDG9Mc/bjObyM81eTeYiocLTrDmKktbXJ7gDlniipUcCoB8Kqa99P+DYTBpODv1ckRTLIsf+5b2lNOfOk6nYioi12HNKJfujAmNKElEXcpZkuqN+X6CVrl/lTBWLHapbmujA/3DS/bwWkPUSNEY1/hj2osh449fsPD1AJB1YkPeZ4jkmNY46Vi9PSQYx+MEmcpgHfK7I9GwNWN8M=) 2025-09-11 00:22:31.249723 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIMF4BWYotaWadAii2m9k2a/j2nNXCQJ5BRdEiG2hbTusi8WcuEn7ba/2UQP6Cpw8+66sa9rXvLUS4OQenzK4Oo=) 2025-09-11 00:22:31.249843 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFQh+apviBNOMe+rRD4SyUATe5sPfSVhkfyzIIHUc3NJ) 2025-09-11 00:22:31.249861 | orchestrator | 2025-09-11 00:22:31.249875 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:31.249887 | orchestrator | Thursday 11 September 2025 00:22:20 +0000 (0:00:01.018) 0:00:13.804 **** 2025-09-11 00:22:31.249901 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBRsl9QGYroNrxw4pCNbVuE2clC8/La+CaqfM2dUx3PvqVGhutnuHVJlXaNx1ISQYBJ1P2wlG5N9Fbv1weIRqMlBpJo/PMGu+2rREM04c5W3n8nqd9AutJeOPyyZ+PZZCZRUaeGmCLgjlRZmhIadNOnOgnh5MeQ+xkT74TwUkBr20OAA2rxkO4C7lhBg2RPgucyNRirXuDTQ05M5QQCFU+F7twNAMjRxaOHIyoYp+FBJnkD3x+ZcZsJDxb6LsbZjk1QPjAuQJ9dugBExS9p7uNymQVPikBV92az/x4aDLngKGmzef09mSVcnEYx6NrSYNRwNsZlX0Ns38/wob4GYor+ubz4I+GIBmrGH7LoiPW89oWo55HmXCy02W2r+3CvR18PEuLDBJPAfOzOJN/t7eO+jfBZq7ZcFp5t1OaUgi1gK3c41tM89jD5AeyIE8PsNgaAb150fTmrXcLt/mnYbnFVVkkdwgmetn+gdfwnE3npEk7nOw/F52ePwpM75kp9S0=) 2025-09-11 00:22:31.249916 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG+aZeFhNXexXYJlC6/4cIHALjKTX6+nSnF+6VaKlZwD20FvRh/1ZxvbXOHV4QZERZUjfeo+PVZKd4tP/QMivSU=) 2025-09-11 00:22:31.249928 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILAaSjUYMNvHa4dCreDKIz2WWs9pJHTkMPLu34bXEbq1) 2025-09-11 00:22:31.249939 | orchestrator | 2025-09-11 00:22:31.249950 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-11 00:22:31.249961 | orchestrator | Thursday 11 September 2025 00:22:21 +0000 (0:00:00.988) 0:00:14.793 **** 2025-09-11 00:22:31.249973 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-11 00:22:31.249984 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-11 00:22:31.249994 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-11 00:22:31.250005 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-11 00:22:31.250068 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-11 00:22:31.250080 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-11 00:22:31.250091 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-11 00:22:31.250101 | orchestrator | 2025-09-11 00:22:31.250112 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-11 00:22:31.250124 | orchestrator | Thursday 11 September 2025 00:22:27 +0000 (0:00:05.150) 0:00:19.943 **** 2025-09-11 00:22:31.250154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-11 00:22:31.250888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-11 00:22:31.250938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-11 00:22:31.250950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-11 00:22:31.250961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-11 00:22:31.250972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-11 00:22:31.250982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-11 00:22:31.250993 | orchestrator | 2025-09-11 00:22:31.251004 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:31.251014 | orchestrator | Thursday 11 September 2025 00:22:27 +0000 (0:00:00.161) 0:00:20.104 **** 2025-09-11 00:22:31.251025 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICPEeh4vXsySyuVDMSYDuCMwZC39zr5MfeYlymeBBLOl) 2025-09-11 00:22:31.251062 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCfEyHneG0UYAnvPVHpphG0nepF3UT9TQyPt+d2szjgZuetKmr0ptfTvpAEUQ2m5P7TmUaBGC0YOvkjrb3R1GzU5i2p6h4mBeYYMmWm6F607OfHzBrL34NNL7duGLYV4Nm31XOcg9mj8kDSiS+pPWJDXYI8F8COwvQif75Vzuo+afEJqpDAOAE2hvWyJk6MkqxRt2MdGQZ7s8KEG6iGKgKYB6jOWrvPL1xN+oJ+saZxZyowINs/6F4ac68zXoG09KKBCNSgTzGSzrwYb/wDFiv0m8Pwc2Wku+DmMgkTnMPtIgPU2bxncNDevXVy2paejVsqR0AHFt3SLP6T27gOkZZI1AnJyq9v2LzYUZrYv25DKoGVBYZkO1ro3dhxQ0XL+vwbY5WeGjHZR+OSJ3us+DA9w3ewvOYE8ZIw4JK1vRS6HTq9XuocqBOX5C/tZKKDWiVLjLq083HdSCQ2AelBWLO7j6dn4Y9gZcZoyeTNY3yLhJ54cVYGqDGXCtp8L9+Ibz0=) 2025-09-11 00:22:31.251075 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBMw6rg1W+0yhhwMxcLiEuQveBqZYxXZi2PvEztsgQMU5x97MBmycGECaNP9MIRLn9dnUNpSsyMZyq82AROrk2M=) 2025-09-11 00:22:31.251087 | orchestrator | 2025-09-11 00:22:31.251098 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:31.251108 | orchestrator | Thursday 11 September 2025 00:22:28 +0000 (0:00:01.004) 0:00:21.109 **** 2025-09-11 00:22:31.251120 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH3QprxnmNC26sKlt/mMSq9G2zvTl4QXwyP/Yr3CpLsfvX3tbEulmOGRBqIGa/+szLas7HRYNS1EIiJhFKDI8bXi12Cs9R7b7RwDe1/fxWC5if0zUy1O2L/tE363xSeS3fRqdXk148S+up7L3X4ynKr2qOFmMUO06P/My0sr2Jogd/tWeG4ZjqBFgSncvoe2Xx613SPaDUC0k3VN2TVrpyJCJj5D7NcpukTwtJGP9XKB71RbuRvPTpk+g5HI/TNGPpfmEKFR5Rq/+IMrlyd6ygYtZHoT3rGcepGCVPdCR4arajT2Chs71VhDK1VJfozaeqgPy45TlAcwyZN65t+8nK9JZ5khYJfCs7tAOHtu2JC2frbVcFaF3uF4K3oFl4qs5qb0nsDmOXy2vB3lFP8qR7jBqoEz5ZOAXUlDfcSlE4GRXhhqQwkCThIWcvYCB5h6CE6/GKVz+ype1R4wfdJGs29hoTt2BxsYlbD19bBe4Eq7r3fSou5LJ3sY+nYdikm4c=) 2025-09-11 00:22:31.251131 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPOYyND9YuWELMGYk1ViTpa77P0BI8KdFcolnTGEHrCp+ruU1Ab534OMEbC8Gk8A0Cg+rTWNIZ1cgW+QLJk1HDQ=) 
2025-09-11 00:22:31.251142 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHMX5oc1QAPmwqA7ENnlTjiKanQFb4oMTerHcWMxTwFm) 2025-09-11 00:22:31.251153 | orchestrator | 2025-09-11 00:22:31.251164 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:31.251174 | orchestrator | Thursday 11 September 2025 00:22:29 +0000 (0:00:01.020) 0:00:22.129 **** 2025-09-11 00:22:31.251194 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBvjDWzYxuxhme3rd9mlFeZlDpGos0/xCCbAFjbiCb+N) 2025-09-11 00:22:31.251205 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpjkx2XsP2SdravudGUik8m60DLJu4/kSKVKSs8mgLDnHt3nIh/TS2S1Idad4aPjNzRmn4EbxGSUIYxho5oY9qA7m5kK9MUEqr63BQRHMcD+WlhHLCIan+ck1DdZSjL4ty8H75LMfCBrAUqhqQO1HIlyljkjQar9a+/PNg1WFngapojUBTjAKoGVAqjOq+wuPfrM3Vlbpkgaba2EG9I0L9FV3nbxZhTTjNdtO5rU9c7MnBHTP2s4wuBq2vnJg1LbOYTJ9IU3EBYSCChLd5bf8yLxZTHQff0qgAXHc+gYV1Ynz+vulwHLONK/oQ62xQcOaIvWHoN0uKvqkwC3q4VKeRzFfohbi5882XCFDRQ5YbrVGC1GjXyMvFUQdaoBOqCt7XMJdMmtX4iG2T0wS9mjr+EBE5UJqsgAFgvmxXmpOI9fxGxwttJKpw0WODt6K9kCIxENfsvKDG4vV9O7m32Yi/5I2SzMhZxZgi6IBqXBw6aE0O3944tEt3a2JjKGhJ1O8=) 2025-09-11 00:22:31.251217 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHseSbiv/+H5CGi1J9Z6B9pFECkSq0gpa5rkg5IsXwQ0xenVkR66XOzXqTgzKLIe3MCJU8jYRblRdv2rDdRdV/o=) 2025-09-11 00:22:31.251227 | orchestrator | 2025-09-11 00:22:31.251238 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:31.251249 | orchestrator | Thursday 11 September 2025 00:22:30 +0000 (0:00:01.031) 0:00:23.160 **** 2025-09-11 00:22:31.251267 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDI0k96rnOFLDhgKvsqTbBCipJSOupOtclAgNmMGcoNexeK/CI45BRvPZWsBLDDYg7MPqxPPSh4UPwt8+CF4VhE5nKwdknNNv93+w4rrWfHarArco2UjCeanEGa3VuMDnw8EHByFpLMwBCIwzueTG/WpYICD9LkhF3WxivhofQqJEJseLUPX5B8+uhXA7/L8BnngjtJBoGOAcYafVqphOuYLfdwQgD/cGpkcj6NIlK11yHpPz8cfGw1adrkRmHLqWqz2MXlg7MVyQzrYveQrM9GO7/EHuj5li4agdsmwBNB7stcnLgqLdoBtmMLb4vk5Rf2o1wF/n7DLwkfK4CDCKuKhBUrU2/JO7g9DdvcQF/+RngwHoEqhbemGrjSnmcK3UneURWtfzYBg6oQ+mxtJB4ueyu3wJLe5IL32xpOt+Khoyq+TeBxZXcpn9tqh6ADeYzkr7xyr+EA152WB2Ywm3axalXxGO895G8EtxJyT6bq2RwgGF+BVwXAmAu61YTsW5U=) 2025-09-11 00:22:31.251278 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN0w2G9brWqdUGENsdPb8GNBEEPdzOuW1MJdDzs+1XDJ) 2025-09-11 00:22:31.251299 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJQ8wkBAn3gylocvMgUX0YPJhET7Ge+ghFu079hQXhf6pPJl6fAexxIJxgZjBe+tqvvpKnL1jLA92hNVD94SDBQ=) 2025-09-11 00:22:35.182682 | orchestrator | 2025-09-11 00:22:35.182782 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:35.182798 | orchestrator | Thursday 11 September 2025 00:22:31 +0000 (0:00:00.994) 0:00:24.155 **** 2025-09-11 00:22:35.182811 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDxak1QYRtVBoQ5h5CRLVPFyauSWh1VJDXUxN0ILa1UY) 2025-09-11 00:22:35.182826 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZCmNiiX5XMv1M5COyrTF9RrikZ7WW9F1C8tslM3ZFUdge//atiJHT9utUbLgBXXWY3kpKw6P9aEL93p1Hs5QELuUtg8cjBM/xiSgifdNC+rpR3bJbq6WaA5EzOM/kRWtUaX40lO/C4swiMMsclFUMygpXrgPp9L1YvFCwuOJZwYr82PsUrOXVRZ5JnvQoINNfnAISd2hq17klKtdl148vCOf1WB4MexvMtq0pA454FJw2ngG94kLT8NOSDaE799qORLObgfEsQPMhd/DlXmci/OGI58ljDgvf5YEf4RhQXsawlkCoJkQgCGmN6DLFarZxgLNjPLHMtNwn/Koyyi6e6JCOxgvYcPnFp6xIGcbgtVS+PgmtqZGkODQj/fi1U+tmj3ePdOz2yE+WCxgme7w1EYXmAJIVcIzAtUr4Chx/mn967gz56LWjkml5kI4YEiisk41/0tT+UttFXh9UKy8oacDxqZ3eoDmHFZ3/sHy52pmA/+dxvrRPRJJ4JbpAczU=) 2025-09-11 00:22:35.182842 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBABJ5cvcMpYZ1wBIm4B1EConHat98dyh/yC4ntJoC+oBSdo+YWMl3Mhtl4tS92eq5btUyly2RZ0pMUT9lCJIKrw=) 2025-09-11 00:22:35.182855 | orchestrator | 2025-09-11 00:22:35.182866 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:35.182876 | orchestrator | Thursday 11 September 2025 00:22:32 +0000 (0:00:00.983) 0:00:25.138 **** 2025-09-11 00:22:35.182887 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjp7vEua6dyUZM9v0slEeLKmgfWNRusuGqXb8wgNO2Q7pj9K2m8zzLRgt0mv+roM576TR82sSWyxIRX2UXvc1NJqFV/Fq4tGH2jZLlwRkyNmlfQ3SSs67notm0noOFP0F4jp3PEnHCwjDXHrL54SxSa4zmz0P2abaODo/goOv7Ce/SeABW5TlMbMeddMoAODq35EwF6h1pfZ6ZSnxQC7eNqH4xJry4tqRwuDpX2sK2f3djixbFeLMRRUT3hDs5KIOG4Abfmumk+oorxoqFWKw5fu6SvghSso+0WR854fIXYGo3gjDaDG9Mc/bjObyM81eTeYiocLTrDmKktbXJ7gDlniipUcCoB8Kqa99P+DYTBpODv1ckRTLIsf+5b2lNOfOk6nYioi12HNKJfujAmNKElEXcpZkuqN+X6CVrl/lTBWLHapbmujA/3DS/bwWkPUSNEY1/hj2osh449fsPD1AJB1YkPeZ4jkmNY46Vi9PSQYx+MEmcpgHfK7I9GwNWN8M=) 2025-09-11 00:22:35.182925 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIMF4BWYotaWadAii2m9k2a/j2nNXCQJ5BRdEiG2hbTusi8WcuEn7ba/2UQP6Cpw8+66sa9rXvLUS4OQenzK4Oo=) 
2025-09-11 00:22:35.182937 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFQh+apviBNOMe+rRD4SyUATe5sPfSVhkfyzIIHUc3NJ) 2025-09-11 00:22:35.182947 | orchestrator | 2025-09-11 00:22:35.182958 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-11 00:22:35.182969 | orchestrator | Thursday 11 September 2025 00:22:33 +0000 (0:00:00.969) 0:00:26.108 **** 2025-09-11 00:22:35.182980 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBRsl9QGYroNrxw4pCNbVuE2clC8/La+CaqfM2dUx3PvqVGhutnuHVJlXaNx1ISQYBJ1P2wlG5N9Fbv1weIRqMlBpJo/PMGu+2rREM04c5W3n8nqd9AutJeOPyyZ+PZZCZRUaeGmCLgjlRZmhIadNOnOgnh5MeQ+xkT74TwUkBr20OAA2rxkO4C7lhBg2RPgucyNRirXuDTQ05M5QQCFU+F7twNAMjRxaOHIyoYp+FBJnkD3x+ZcZsJDxb6LsbZjk1QPjAuQJ9dugBExS9p7uNymQVPikBV92az/x4aDLngKGmzef09mSVcnEYx6NrSYNRwNsZlX0Ns38/wob4GYor+ubz4I+GIBmrGH7LoiPW89oWo55HmXCy02W2r+3CvR18PEuLDBJPAfOzOJN/t7eO+jfBZq7ZcFp5t1OaUgi1gK3c41tM89jD5AeyIE8PsNgaAb150fTmrXcLt/mnYbnFVVkkdwgmetn+gdfwnE3npEk7nOw/F52ePwpM75kp9S0=) 2025-09-11 00:22:35.182991 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG+aZeFhNXexXYJlC6/4cIHALjKTX6+nSnF+6VaKlZwD20FvRh/1ZxvbXOHV4QZERZUjfeo+PVZKd4tP/QMivSU=) 2025-09-11 00:22:35.183002 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILAaSjUYMNvHa4dCreDKIz2WWs9pJHTkMPLu34bXEbq1) 2025-09-11 00:22:35.183013 | orchestrator | 2025-09-11 00:22:35.183024 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-11 00:22:35.183035 | orchestrator | Thursday 11 September 2025 00:22:34 +0000 (0:00:01.022) 0:00:27.131 **** 2025-09-11 00:22:35.183046 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-11 00:22:35.183057 | orchestrator | skipping: [testbed-manager] 
=> (item=testbed-node-3)  2025-09-11 00:22:35.183068 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-11 00:22:35.183078 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-11 00:22:35.183089 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-11 00:22:35.183099 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-11 00:22:35.183110 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-11 00:22:35.183122 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:22:35.183134 | orchestrator | 2025-09-11 00:22:35.183162 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-11 00:22:35.183173 | orchestrator | Thursday 11 September 2025 00:22:34 +0000 (0:00:00.153) 0:00:27.284 **** 2025-09-11 00:22:35.183184 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:22:35.183194 | orchestrator | 2025-09-11 00:22:35.183205 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-11 00:22:35.183216 | orchestrator | Thursday 11 September 2025 00:22:34 +0000 (0:00:00.047) 0:00:27.331 **** 2025-09-11 00:22:35.183227 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:22:35.183237 | orchestrator | 2025-09-11 00:22:35.183248 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-11 00:22:35.183258 | orchestrator | Thursday 11 September 2025 00:22:34 +0000 (0:00:00.039) 0:00:27.370 **** 2025-09-11 00:22:35.183277 | orchestrator | changed: [testbed-manager] 2025-09-11 00:22:35.183287 | orchestrator | 2025-09-11 00:22:35.183298 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:22:35.183309 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-11 00:22:35.183320 | 
orchestrator | 2025-09-11 00:22:35.183331 | orchestrator | 2025-09-11 00:22:35.183342 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:22:35.183352 | orchestrator | Thursday 11 September 2025 00:22:34 +0000 (0:00:00.500) 0:00:27.870 **** 2025-09-11 00:22:35.183363 | orchestrator | =============================================================================== 2025-09-11 00:22:35.183374 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.50s 2025-09-11 00:22:35.183384 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.15s 2025-09-11 00:22:35.183435 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.03s 2025-09-11 00:22:35.183448 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.97s 2025-09-11 00:22:35.183458 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-11 00:22:35.183469 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-11 00:22:35.183479 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-11 00:22:35.183490 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-11 00:22:35.183501 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-11 00:22:35.183511 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-09-11 00:22:35.183522 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-09-11 00:22:35.183532 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-09-11 00:22:35.183543 | orchestrator | osism.commons.known_hosts : Write 
scanned known_hosts entries ----------- 0.99s 2025-09-11 00:22:35.183554 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-09-11 00:22:35.183564 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-09-11 00:22:35.183575 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-09-11 00:22:35.183585 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s 2025-09-11 00:22:35.183596 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-09-11 00:22:35.183607 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2025-09-11 00:22:35.183623 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.14s 2025-09-11 00:22:35.428918 | orchestrator | + osism apply squid 2025-09-11 00:22:47.477072 | orchestrator | 2025-09-11 00:22:47 | INFO  | Task d843239b-0fd6-441b-9f55-f8f2d921c0a6 (squid) was prepared for execution. 2025-09-11 00:22:47.477169 | orchestrator | 2025-09-11 00:22:47 | INFO  | It takes a moment until task d843239b-0fd6-441b-9f55-f8f2d921c0a6 (squid) has been started and output is visible here. 
2025-09-11 00:24:40.028183 | orchestrator | 2025-09-11 00:24:40.028292 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-11 00:24:40.028308 | orchestrator | 2025-09-11 00:24:40.028320 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-11 00:24:40.028330 | orchestrator | Thursday 11 September 2025 00:22:51 +0000 (0:00:00.121) 0:00:00.121 **** 2025-09-11 00:24:40.028341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-11 00:24:40.028351 | orchestrator | 2025-09-11 00:24:40.028361 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-11 00:24:40.028455 | orchestrator | Thursday 11 September 2025 00:22:51 +0000 (0:00:00.068) 0:00:00.189 **** 2025-09-11 00:24:40.028468 | orchestrator | ok: [testbed-manager] 2025-09-11 00:24:40.028479 | orchestrator | 2025-09-11 00:24:40.028489 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-11 00:24:40.028499 | orchestrator | Thursday 11 September 2025 00:22:52 +0000 (0:00:01.072) 0:00:01.261 **** 2025-09-11 00:24:40.028509 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-11 00:24:40.028518 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-11 00:24:40.028528 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-11 00:24:40.028538 | orchestrator | 2025-09-11 00:24:40.028548 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-11 00:24:40.028557 | orchestrator | Thursday 11 September 2025 00:22:53 +0000 (0:00:00.994) 0:00:02.256 **** 2025-09-11 00:24:40.028567 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-11 00:24:40.028577 | 
orchestrator | 2025-09-11 00:24:40.028586 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-11 00:24:40.028596 | orchestrator | Thursday 11 September 2025 00:22:54 +0000 (0:00:00.985) 0:00:03.242 **** 2025-09-11 00:24:40.028605 | orchestrator | ok: [testbed-manager] 2025-09-11 00:24:40.028615 | orchestrator | 2025-09-11 00:24:40.028625 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-11 00:24:40.028634 | orchestrator | Thursday 11 September 2025 00:22:54 +0000 (0:00:00.333) 0:00:03.575 **** 2025-09-11 00:24:40.028644 | orchestrator | changed: [testbed-manager] 2025-09-11 00:24:40.028654 | orchestrator | 2025-09-11 00:24:40.028663 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-11 00:24:40.028673 | orchestrator | Thursday 11 September 2025 00:22:55 +0000 (0:00:00.838) 0:00:04.414 **** 2025-09-11 00:24:40.028682 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-09-11 00:24:40.028692 | orchestrator | ok: [testbed-manager] 2025-09-11 00:24:40.028702 | orchestrator | 2025-09-11 00:24:40.028711 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-11 00:24:40.028721 | orchestrator | Thursday 11 September 2025 00:23:26 +0000 (0:00:31.636) 0:00:36.051 **** 2025-09-11 00:24:40.028732 | orchestrator | changed: [testbed-manager] 2025-09-11 00:24:40.028744 | orchestrator | 2025-09-11 00:24:40.028755 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-11 00:24:40.028767 | orchestrator | Thursday 11 September 2025 00:23:39 +0000 (0:00:12.084) 0:00:48.136 **** 2025-09-11 00:24:40.028779 | orchestrator | Pausing for 60 seconds 2025-09-11 00:24:40.028790 | orchestrator | changed: [testbed-manager] 2025-09-11 00:24:40.028800 | orchestrator | 2025-09-11 00:24:40.028810 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-11 00:24:40.028819 | orchestrator | Thursday 11 September 2025 00:24:39 +0000 (0:01:00.067) 0:01:48.204 **** 2025-09-11 00:24:40.028829 | orchestrator | ok: [testbed-manager] 2025-09-11 00:24:40.028838 | orchestrator | 2025-09-11 00:24:40.028848 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-11 00:24:40.028857 | orchestrator | Thursday 11 September 2025 00:24:39 +0000 (0:00:00.055) 0:01:48.259 **** 2025-09-11 00:24:40.028867 | orchestrator | changed: [testbed-manager] 2025-09-11 00:24:40.028876 | orchestrator | 2025-09-11 00:24:40.028886 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:24:40.028896 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:24:40.028905 | orchestrator | 2025-09-11 00:24:40.028915 | orchestrator | 2025-09-11 00:24:40.028925 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-11 00:24:40.028934 | orchestrator | Thursday 11 September 2025 00:24:39 +0000 (0:00:00.610) 0:01:48.869 **** 2025-09-11 00:24:40.028951 | orchestrator | =============================================================================== 2025-09-11 00:24:40.028960 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-11 00:24:40.028970 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.64s 2025-09-11 00:24:40.028979 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.08s 2025-09-11 00:24:40.028988 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.07s 2025-09-11 00:24:40.028998 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.99s 2025-09-11 00:24:40.029007 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.99s 2025-09-11 00:24:40.029017 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.84s 2025-09-11 00:24:40.029027 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2025-09-11 00:24:40.029036 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2025-09-11 00:24:40.029046 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2025-09-11 00:24:40.029056 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-09-11 00:24:40.334934 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-11 00:24:40.335443 | orchestrator | ++ semver latest 9.0.0 2025-09-11 00:24:40.376517 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-11 00:24:40.376588 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-11 00:24:40.376896 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-11 00:24:52.421339 | orchestrator | 2025-09-11 00:24:52 | INFO  | Task 089292a3-410c-4b8b-9ae3-1acd5889cf21 (operator) was prepared for execution. 2025-09-11 00:24:52.421499 | orchestrator | 2025-09-11 00:24:52 | INFO  | It takes a moment until task 089292a3-410c-4b8b-9ae3-1acd5889cf21 (operator) has been started and output is visible here. 2025-09-11 00:25:08.226606 | orchestrator | 2025-09-11 00:25:08.226715 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-11 00:25:08.226731 | orchestrator | 2025-09-11 00:25:08.226742 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-11 00:25:08.226752 | orchestrator | Thursday 11 September 2025 00:24:56 +0000 (0:00:00.147) 0:00:00.147 **** 2025-09-11 00:25:08.226776 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:25:08.226788 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:25:08.226798 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:25:08.226807 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:25:08.226817 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:25:08.226826 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:25:08.226836 | orchestrator | 2025-09-11 00:25:08.226845 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-11 00:25:08.226855 | orchestrator | Thursday 11 September 2025 00:24:59 +0000 (0:00:03.743) 0:00:03.891 **** 2025-09-11 00:25:08.226865 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:25:08.226874 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:25:08.226885 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:25:08.226895 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:25:08.226904 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:25:08.226913 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:25:08.226923 | orchestrator | 2025-09-11 
00:25:08.226933 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-11 00:25:08.226942 | orchestrator | 2025-09-11 00:25:08.226952 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-11 00:25:08.226962 | orchestrator | Thursday 11 September 2025 00:25:00 +0000 (0:00:00.769) 0:00:04.661 **** 2025-09-11 00:25:08.226971 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:25:08.226981 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:25:08.226990 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:25:08.227000 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:25:08.227009 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:25:08.227019 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:25:08.227048 | orchestrator | 2025-09-11 00:25:08.227058 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-11 00:25:08.227067 | orchestrator | Thursday 11 September 2025 00:25:00 +0000 (0:00:00.147) 0:00:04.808 **** 2025-09-11 00:25:08.227077 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:25:08.227086 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:25:08.227096 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:25:08.227105 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:25:08.227115 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:25:08.227124 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:25:08.227134 | orchestrator | 2025-09-11 00:25:08.227143 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-11 00:25:08.227153 | orchestrator | Thursday 11 September 2025 00:25:01 +0000 (0:00:00.158) 0:00:04.967 **** 2025-09-11 00:25:08.227165 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:25:08.227177 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:25:08.227188 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:25:08.227199 | 
orchestrator | changed: [testbed-node-5] 2025-09-11 00:25:08.227210 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:25:08.227221 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:25:08.227233 | orchestrator | 2025-09-11 00:25:08.227244 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-11 00:25:08.227256 | orchestrator | Thursday 11 September 2025 00:25:01 +0000 (0:00:00.665) 0:00:05.632 **** 2025-09-11 00:25:08.227267 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:25:08.227278 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:25:08.227289 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:25:08.227301 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:25:08.227312 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:25:08.227323 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:25:08.227334 | orchestrator | 2025-09-11 00:25:08.227345 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-11 00:25:08.227356 | orchestrator | Thursday 11 September 2025 00:25:02 +0000 (0:00:00.789) 0:00:06.422 **** 2025-09-11 00:25:08.227368 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-11 00:25:08.227378 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-11 00:25:08.227412 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-11 00:25:08.227424 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-11 00:25:08.227436 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-11 00:25:08.227447 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-11 00:25:08.227458 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-11 00:25:08.227470 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-11 00:25:08.227480 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-11 00:25:08.227491 | orchestrator | changed: 
[testbed-node-1] => (item=sudo) 2025-09-11 00:25:08.227503 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-11 00:25:08.227514 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-11 00:25:08.227525 | orchestrator | 2025-09-11 00:25:08.227541 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-11 00:25:08.227553 | orchestrator | Thursday 11 September 2025 00:25:03 +0000 (0:00:01.140) 0:00:07.563 **** 2025-09-11 00:25:08.227567 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:25:08.227577 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:25:08.227586 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:25:08.227595 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:25:08.227605 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:25:08.227614 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:25:08.227623 | orchestrator | 2025-09-11 00:25:08.227633 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-11 00:25:08.227643 | orchestrator | Thursday 11 September 2025 00:25:04 +0000 (0:00:01.223) 0:00:08.786 **** 2025-09-11 00:25:08.227652 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-11 00:25:08.227668 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To
2025-09-11 00:25:08.227678 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-11 00:25:08.227687 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-11 00:25:08.227712 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-11 00:25:08.227723 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-11 00:25:08.227732 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-11 00:25:08.227742 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-11 00:25:08.227751 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-11 00:25:08.227761 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-11 00:25:08.227770 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-11 00:25:08.227780 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-11 00:25:08.227789 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-11 00:25:08.227798 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-11 00:25:08.227808 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-11 00:25:08.227817 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-11 00:25:08.227827 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-11 00:25:08.227836 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-11 00:25:08.227846 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-11 00:25:08.227855 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-11 00:25:08.227869 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-11 00:25:08.227882 | orchestrator |
2025-09-11 00:25:08.227892 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-11 00:25:08.227902 | orchestrator | Thursday 11 September 2025 00:25:06 +0000 (0:00:01.299) 0:00:10.085 ****
2025-09-11 00:25:08.227911 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:25:08.227921 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:25:08.227930 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:25:08.227940 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:25:08.227949 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:25:08.227962 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:25:08.227976 | orchestrator |
2025-09-11 00:25:08.227985 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-11 00:25:08.227995 | orchestrator | Thursday 11 September 2025 00:25:06 +0000 (0:00:00.157) 0:00:10.243 ****
2025-09-11 00:25:08.228004 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:25:08.228013 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:25:08.228023 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:25:08.228032 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:25:08.228042 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:25:08.228051 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:25:08.228061 | orchestrator |
2025-09-11 00:25:08.228070 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-11 00:25:08.228080 | orchestrator | Thursday 11 September 2025 00:25:06 +0000 (0:00:00.562) 0:00:10.805 ****
2025-09-11 00:25:08.228089 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:25:08.228099 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:25:08.228108 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:25:08.228117 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:25:08.228127 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:25:08.228136 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:25:08.228146 | orchestrator |
2025-09-11 00:25:08.228155 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-11 00:25:08.228170 | orchestrator | Thursday 11 September 2025 00:25:07 +0000 (0:00:00.144) 0:00:10.950 ****
2025-09-11 00:25:08.228180 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-11 00:25:08.228193 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-11 00:25:08.228203 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:25:08.228213 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-11 00:25:08.228222 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-11 00:25:08.228231 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:25:08.228241 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:25:08.228250 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:25:08.228260 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-11 00:25:08.228269 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:25:08.228278 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-11 00:25:08.228288 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:25:08.228297 | orchestrator |
2025-09-11 00:25:08.228307 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-11 00:25:08.228316 | orchestrator | Thursday 11 September 2025 00:25:07 +0000 (0:00:00.724) 0:00:11.674 ****
2025-09-11 00:25:08.228326 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:25:08.228335 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:25:08.228344 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:25:08.228354 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:25:08.228363 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:25:08.228373 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:25:08.228397 | orchestrator |
2025-09-11 00:25:08.228407 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-11 00:25:08.228422 | orchestrator | Thursday 11 September 2025 00:25:07 +0000 (0:00:00.155) 0:00:11.830 ****
2025-09-11 00:25:08.228431 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:25:08.228441 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:25:08.228450 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:25:08.228460 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:25:08.228469 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:25:08.228482 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:25:08.228500 | orchestrator |
2025-09-11 00:25:08.228517 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-11 00:25:08.228534 | orchestrator | Thursday 11 September 2025 00:25:08 +0000 (0:00:00.156) 0:00:11.987 ****
2025-09-11 00:25:08.228553 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:25:08.228570 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:25:08.228587 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:25:08.228601 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:25:08.228619 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:25:09.273739 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:25:09.273825 | orchestrator |
2025-09-11 00:25:09.273841 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-11 00:25:09.273854 | orchestrator | Thursday 11 September 2025 00:25:08 +0000 (0:00:00.126) 0:00:12.113 ****
2025-09-11 00:25:09.273866 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:25:09.273877 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:25:09.273888 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:25:09.273899 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:25:09.273910 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:25:09.273920 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:25:09.273932 | orchestrator |
2025-09-11 00:25:09.273943 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-11 00:25:09.273954 | orchestrator | Thursday 11 September 2025 00:25:08 +0000 (0:00:00.731) 0:00:12.845 ****
2025-09-11 00:25:09.273965 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:25:09.273976 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:25:09.273986 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:25:09.274079 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:25:09.274095 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:25:09.274106 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:25:09.274117 | orchestrator |
2025-09-11 00:25:09.274127 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:25:09.274139 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-11 00:25:09.274151 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-11 00:25:09.274162 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-11 00:25:09.274173 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-11 00:25:09.274183 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-11 00:25:09.274194 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-11 00:25:09.274205 | orchestrator |
2025-09-11 00:25:09.274216 | orchestrator |
2025-09-11 00:25:09.274226 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:25:09.274237 | orchestrator | Thursday 11 September 2025 00:25:09 +0000 (0:00:00.176) 0:00:13.021 ****
2025-09-11 00:25:09.274248 | orchestrator | ===============================================================================
2025-09-11 00:25:09.274259 | orchestrator | Gathering Facts --------------------------------------------------------- 3.74s
2025-09-11 00:25:09.274269 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s
2025-09-11 00:25:09.274281 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s
2025-09-11 00:25:09.274291 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.14s
2025-09-11 00:25:09.274304 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s
2025-09-11 00:25:09.274317 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s
2025-09-11 00:25:09.274329 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.73s
2025-09-11 00:25:09.274342 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s
2025-09-11 00:25:09.274354 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s
2025-09-11 00:25:09.274367 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2025-09-11 00:25:09.274409 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.18s
2025-09-11 00:25:09.274423 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-09-11 00:25:09.274435 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2025-09-11 00:25:09.274447 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2025-09-11 00:25:09.274470 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2025-09-11 00:25:09.274481 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2025-09-11 00:25:09.274492 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.14s
2025-09-11 00:25:09.274502 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s
2025-09-11 00:25:09.446001 | orchestrator | + osism apply --environment custom facts
2025-09-11 00:25:11.081911 | orchestrator | 2025-09-11 00:25:11 | INFO  | Trying to run play facts in environment custom
2025-09-11 00:25:21.170743 | orchestrator | 2025-09-11 00:25:21 | INFO  | Task a943397b-512e-4cf7-903a-94354c36949a (facts) was prepared for execution.
2025-09-11 00:25:21.170861 | orchestrator | 2025-09-11 00:25:21 | INFO  | It takes a moment until task a943397b-512e-4cf7-903a-94354c36949a (facts) has been started and output is visible here.
2025-09-11 00:26:06.601248 | orchestrator |
2025-09-11 00:26:06.601366 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-11 00:26:06.601440 | orchestrator |
2025-09-11 00:26:06.601453 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-11 00:26:06.601465 | orchestrator | Thursday 11 September 2025 00:25:24 +0000 (0:00:00.083) 0:00:00.083 ****
2025-09-11 00:26:06.601476 | orchestrator | ok: [testbed-manager]
2025-09-11 00:26:06.601488 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:26:06.601500 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:26:06.601511 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:26:06.601522 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:26:06.601533 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:26:06.601544 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:26:06.601555 | orchestrator |
2025-09-11 00:26:06.601566 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-11 00:26:06.601577 | orchestrator | Thursday 11 September 2025 00:25:26 +0000 (0:00:01.403) 0:00:01.487 ****
2025-09-11 00:26:06.601588 | orchestrator | ok: [testbed-manager]
2025-09-11 00:26:06.601599 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:26:06.601610 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:26:06.601621 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:26:06.601631 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:26:06.601642 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:26:06.601653 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:26:06.601664 | orchestrator |
2025-09-11 00:26:06.601675 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-11 00:26:06.601686 | orchestrator |
2025-09-11 00:26:06.601697 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-11 00:26:06.601709 | orchestrator | Thursday 11 September 2025 00:25:27 +0000 (0:00:01.220) 0:00:02.707 ****
2025-09-11 00:26:06.601720 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:06.601731 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:06.601741 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:06.601753 | orchestrator |
2025-09-11 00:26:06.601766 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-11 00:26:06.601780 | orchestrator | Thursday 11 September 2025 00:25:27 +0000 (0:00:00.116) 0:00:02.823 ****
2025-09-11 00:26:06.601792 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:06.601805 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:06.601817 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:06.601830 | orchestrator |
2025-09-11 00:26:06.601843 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-11 00:26:06.601856 | orchestrator | Thursday 11 September 2025 00:25:27 +0000 (0:00:00.211) 0:00:03.035 ****
2025-09-11 00:26:06.601868 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:06.601881 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:06.601894 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:06.601906 | orchestrator |
2025-09-11 00:26:06.601919 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-11 00:26:06.601932 | orchestrator | Thursday 11 September 2025 00:25:28 +0000 (0:00:00.201) 0:00:03.237 ****
2025-09-11 00:26:06.601946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:26:06.601960 | orchestrator |
2025-09-11 00:26:06.601973 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-11 00:26:06.601986 | orchestrator | Thursday 11 September 2025 00:25:28 +0000 (0:00:00.161) 0:00:03.398 ****
2025-09-11 00:26:06.602081 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:06.602097 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:06.602109 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:06.602120 | orchestrator |
2025-09-11 00:26:06.602142 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-11 00:26:06.602152 | orchestrator | Thursday 11 September 2025 00:25:28 +0000 (0:00:00.512) 0:00:03.911 ****
2025-09-11 00:26:06.602163 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:26:06.602174 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:26:06.602185 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:26:06.602196 | orchestrator |
2025-09-11 00:26:06.602207 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-11 00:26:06.602217 | orchestrator | Thursday 11 September 2025 00:25:28 +0000 (0:00:00.103) 0:00:04.015 ****
2025-09-11 00:26:06.602228 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:26:06.602239 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:26:06.602250 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:26:06.602260 | orchestrator |
2025-09-11 00:26:06.602271 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-11 00:26:06.602282 | orchestrator | Thursday 11 September 2025 00:25:30 +0000 (0:00:01.111) 0:00:05.126 ****
2025-09-11 00:26:06.602293 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:06.602303 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:06.602314 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:06.602325 | orchestrator |
2025-09-11 00:26:06.602335 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-11 00:26:06.602347 | orchestrator | Thursday 11 September 2025 00:25:30 +0000 (0:00:00.472) 0:00:05.598 ****
2025-09-11 00:26:06.602358 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:26:06.602369 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:26:06.602399 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:26:06.602410 | orchestrator |
2025-09-11 00:26:06.602420 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-11 00:26:06.602431 | orchestrator | Thursday 11 September 2025 00:25:31 +0000 (0:00:01.121) 0:00:06.720 ****
2025-09-11 00:26:06.602460 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:26:06.602472 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:26:06.602482 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:26:06.602493 | orchestrator |
2025-09-11 00:26:06.602504 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-11 00:26:06.602515 | orchestrator | Thursday 11 September 2025 00:25:49 +0000 (0:00:18.238) 0:00:24.958 ****
2025-09-11 00:26:06.602525 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:26:06.602536 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:26:06.602547 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:26:06.602558 | orchestrator |
2025-09-11 00:26:06.602568 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-11 00:26:06.602598 | orchestrator | Thursday 11 September 2025 00:25:49 +0000 (0:00:00.117) 0:00:25.076 ****
2025-09-11 00:26:06.602609 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:26:06.602620 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:26:06.602631 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:26:06.602642 | orchestrator |
2025-09-11 00:26:06.602653 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-11 00:26:06.602663 | orchestrator | Thursday 11 September 2025 00:25:57 +0000 (0:00:07.448) 0:00:32.525 ****
2025-09-11 00:26:06.602674 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:06.602685 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:06.602696 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:06.602706 | orchestrator |
2025-09-11 00:26:06.602717 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-11 00:26:06.602728 | orchestrator | Thursday 11 September 2025 00:25:57 +0000 (0:00:00.469) 0:00:32.994 ****
2025-09-11 00:26:06.602738 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-11 00:26:06.602749 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-11 00:26:06.602767 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-11 00:26:06.602777 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-11 00:26:06.602788 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-11 00:26:06.602798 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-11 00:26:06.602809 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-11 00:26:06.602820 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-11 00:26:06.602831 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-11 00:26:06.602841 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-11 00:26:06.602852 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-11 00:26:06.602863 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-11 00:26:06.602874 | orchestrator |
2025-09-11 00:26:06.602884 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-11 00:26:06.602895 | orchestrator | Thursday 11 September 2025 00:26:01 +0000 (0:00:03.485) 0:00:36.480 ****
2025-09-11 00:26:06.602906 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:06.602917 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:06.602927 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:06.602938 | orchestrator |
2025-09-11 00:26:06.602949 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-11 00:26:06.602959 | orchestrator |
2025-09-11 00:26:06.602970 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-11 00:26:06.602981 | orchestrator | Thursday 11 September 2025 00:26:02 +0000 (0:00:01.226) 0:00:37.707 ****
2025-09-11 00:26:06.602992 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:26:06.603003 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:26:06.603013 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:26:06.603024 | orchestrator | ok: [testbed-manager]
2025-09-11 00:26:06.603035 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:06.603046 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:06.603056 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:06.603067 | orchestrator |
2025-09-11 00:26:06.603077 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:26:06.603089 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:26:06.603100 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:26:06.603113 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:26:06.603124 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:26:06.603135 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:26:06.603146 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:26:06.603162 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:26:06.603173 | orchestrator |
2025-09-11 00:26:06.603184 | orchestrator |
2025-09-11 00:26:06.603195 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:26:06.603206 | orchestrator | Thursday 11 September 2025 00:26:06 +0000 (0:00:03.990) 0:00:41.698 ****
2025-09-11 00:26:06.603217 | orchestrator | ===============================================================================
2025-09-11 00:26:06.603234 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.24s
2025-09-11 00:26:06.603245 | orchestrator | Install required packages (Debian) -------------------------------------- 7.45s
2025-09-11 00:26:06.603255 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.99s
2025-09-11 00:26:06.603266 | orchestrator | Copy fact files --------------------------------------------------------- 3.49s
2025-09-11 00:26:06.603277 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2025-09-11 00:26:06.603287 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.23s
2025-09-11 00:26:06.603304 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2025-09-11 00:26:06.786654 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.12s
2025-09-11 00:26:06.786739 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.11s
2025-09-11 00:26:06.786753 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.51s
2025-09-11 00:26:06.786764 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-09-11 00:26:06.786775 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2025-09-11 00:26:06.786786 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2025-09-11 00:26:06.786797 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2025-09-11 00:26:06.786808 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2025-09-11 00:26:06.786819 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2025-09-11 00:26:06.786830 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-09-11 00:26:06.786840 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2025-09-11 00:26:07.028777 | orchestrator | + osism apply bootstrap
2025-09-11 00:26:18.912259 | orchestrator | 2025-09-11 00:26:18 | INFO  | Task 339456bd-434b-4b1c-b3c6-4bc9f1b2738a (bootstrap) was prepared for execution.
2025-09-11 00:26:18.912421 | orchestrator | 2025-09-11 00:26:18 | INFO  | It takes a moment until task 339456bd-434b-4b1c-b3c6-4bc9f1b2738a (bootstrap) has been started and output is visible here.
2025-09-11 00:26:35.416476 | orchestrator |
2025-09-11 00:26:35.416586 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-11 00:26:35.416600 | orchestrator |
2025-09-11 00:26:35.416611 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-11 00:26:35.416621 | orchestrator | Thursday 11 September 2025 00:26:22 +0000 (0:00:00.167) 0:00:00.167 ****
2025-09-11 00:26:35.416631 | orchestrator | ok: [testbed-manager]
2025-09-11 00:26:35.416642 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:35.416652 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:35.416661 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:35.416671 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:26:35.416680 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:26:35.416689 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:26:35.416699 | orchestrator |
2025-09-11 00:26:35.416708 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-11 00:26:35.416718 | orchestrator |
2025-09-11 00:26:35.416727 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-11 00:26:35.416737 | orchestrator | Thursday 11 September 2025 00:26:23 +0000 (0:00:00.235) 0:00:00.402 ****
2025-09-11 00:26:35.416746 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:26:35.416755 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:26:35.416765 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:26:35.416774 | orchestrator | ok: [testbed-manager]
2025-09-11 00:26:35.416783 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:35.416793 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:35.416802 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:35.416832 | orchestrator |
2025-09-11 00:26:35.416843 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-11 00:26:35.416852 | orchestrator |
2025-09-11 00:26:35.416861 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-11 00:26:35.416871 | orchestrator | Thursday 11 September 2025 00:26:26 +0000 (0:00:03.556) 0:00:03.958 ****
2025-09-11 00:26:35.416880 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-11 00:26:35.416890 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-11 00:26:35.416899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-11 00:26:35.416909 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-11 00:26:35.416918 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-11 00:26:35.416927 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-11 00:26:35.416937 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-11 00:26:35.416946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-11 00:26:35.416955 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-11 00:26:35.416965 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-11 00:26:35.416977 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-11 00:26:35.416988 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-11 00:26:35.417000 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-11 00:26:35.417010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-11 00:26:35.417021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-11 00:26:35.417031 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-11 00:26:35.417043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-11 00:26:35.417053 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-11 00:26:35.417064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-11 00:26:35.417075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-11 00:26:35.417086 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:26:35.417096 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-11 00:26:35.417107 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-11 00:26:35.417118 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:26:35.417129 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-11 00:26:35.417140 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-11 00:26:35.417150 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-11 00:26:35.417161 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-11 00:26:35.417171 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-11 00:26:35.417182 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-11 00:26:35.417193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-11 00:26:35.417203 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-11 00:26:35.417214 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-11 00:26:35.417225 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-11 00:26:35.417236 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:26:35.417246 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-11 00:26:35.417271 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-11 00:26:35.417282 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-11 00:26:35.417293 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-11 00:26:35.417304 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-11 00:26:35.417314 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-11 00:26:35.417332 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-11 00:26:35.417342 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-11 00:26:35.417352 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-11 00:26:35.417361 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:26:35.417391 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-11 00:26:35.417418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-11 00:26:35.417429 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-11 00:26:35.417438 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-11 00:26:35.417447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-11 00:26:35.417456 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-11 00:26:35.417466 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:26:35.417475 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-11 00:26:35.417484 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-11 00:26:35.417494 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:26:35.417503 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:26:35.417512 | orchestrator |
2025-09-11 00:26:35.417521 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-11 00:26:35.417531 | orchestrator |
2025-09-11 00:26:35.417540 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-11 00:26:35.417549 | orchestrator | Thursday 11 September 2025 00:26:27 +0000 (0:00:00.398) 0:00:04.357 ****
2025-09-11 00:26:35.417559 | orchestrator | ok: [testbed-manager]
2025-09-11 00:26:35.417568 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:35.417577 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:35.417586 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:26:35.417596 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:35.417605 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:26:35.417614 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:26:35.417623 | orchestrator |
2025-09-11 00:26:35.417633 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-11 00:26:35.417642 | orchestrator | Thursday 11 September 2025 00:26:29 +0000 (0:00:02.214) 0:00:06.572 ****
2025-09-11 00:26:35.417651 | orchestrator | ok: [testbed-manager]
2025-09-11 00:26:35.417661 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:26:35.417670 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:26:35.417679 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:26:35.417688 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:26:35.417697 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:26:35.417707 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:26:35.417716 | orchestrator |
2025-09-11 00:26:35.417725 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-11 00:26:35.417734 | orchestrator | Thursday 11 September 2025 00:26:30 +0000 (0:00:01.263) 0:00:07.835 ****
2025-09-11 00:26:35.417745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:26:35.417757 | orchestrator |
2025-09-11 00:26:35.417767 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-11 00:26:35.417776 | orchestrator |
Thursday 11 September 2025 00:26:30 +0000 (0:00:00.266) 0:00:08.102 **** 2025-09-11 00:26:35.417785 | orchestrator | changed: [testbed-manager] 2025-09-11 00:26:35.417795 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:26:35.417809 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:26:35.417818 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:26:35.417828 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:26:35.417837 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:26:35.417846 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:26:35.417856 | orchestrator | 2025-09-11 00:26:35.417872 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-11 00:26:35.417882 | orchestrator | Thursday 11 September 2025 00:26:32 +0000 (0:00:02.014) 0:00:10.117 **** 2025-09-11 00:26:35.417891 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:26:35.417901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:26:35.417913 | orchestrator | 2025-09-11 00:26:35.417922 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-11 00:26:35.417932 | orchestrator | Thursday 11 September 2025 00:26:33 +0000 (0:00:00.252) 0:00:10.369 **** 2025-09-11 00:26:35.417941 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:26:35.417950 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:26:35.417960 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:26:35.417969 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:26:35.417978 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:26:35.417987 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:26:35.417997 | orchestrator | 2025-09-11 00:26:35.418006 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2025-09-11 00:26:35.418057 | orchestrator | Thursday 11 September 2025 00:26:34 +0000 (0:00:01.084) 0:00:11.454 **** 2025-09-11 00:26:35.418069 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:26:35.418079 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:26:35.418088 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:26:35.418097 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:26:35.418106 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:26:35.418116 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:26:35.418125 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:26:35.418134 | orchestrator | 2025-09-11 00:26:35.418144 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-11 00:26:35.418153 | orchestrator | Thursday 11 September 2025 00:26:34 +0000 (0:00:00.636) 0:00:12.090 **** 2025-09-11 00:26:35.418162 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:26:35.418172 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:26:35.418181 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:26:35.418190 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:26:35.418199 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:26:35.418209 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:26:35.418218 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:35.418227 | orchestrator | 2025-09-11 00:26:35.418237 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-11 00:26:35.418247 | orchestrator | Thursday 11 September 2025 00:26:35 +0000 (0:00:00.410) 0:00:12.501 **** 2025-09-11 00:26:35.418257 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:26:35.418266 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:26:35.418282 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:26:46.399157 | orchestrator | skipping: 
[testbed-node-5] 2025-09-11 00:26:46.399278 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:26:46.399294 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:26:46.399306 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:26:46.399318 | orchestrator | 2025-09-11 00:26:46.399331 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-11 00:26:46.399343 | orchestrator | Thursday 11 September 2025 00:26:35 +0000 (0:00:00.189) 0:00:12.691 **** 2025-09-11 00:26:46.399356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:26:46.399419 | orchestrator | 2025-09-11 00:26:46.399432 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-11 00:26:46.399444 | orchestrator | Thursday 11 September 2025 00:26:35 +0000 (0:00:00.268) 0:00:12.960 **** 2025-09-11 00:26:46.399479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:26:46.399491 | orchestrator | 2025-09-11 00:26:46.399502 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-11 00:26:46.399513 | orchestrator | Thursday 11 September 2025 00:26:36 +0000 (0:00:00.303) 0:00:13.263 **** 2025-09-11 00:26:46.399524 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:26:46.399535 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:26:46.399546 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:26:46.399557 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:26:46.399567 | orchestrator | ok: [testbed-manager] 2025-09-11 
00:26:46.399578 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:26:46.399589 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:26:46.399599 | orchestrator | 2025-09-11 00:26:46.399610 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-11 00:26:46.399621 | orchestrator | Thursday 11 September 2025 00:26:37 +0000 (0:00:01.159) 0:00:14.422 **** 2025-09-11 00:26:46.399632 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:26:46.399642 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:26:46.399653 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:26:46.399664 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:26:46.399674 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:26:46.399685 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:26:46.399698 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:26:46.399710 | orchestrator | 2025-09-11 00:26:46.399723 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-11 00:26:46.399735 | orchestrator | Thursday 11 September 2025 00:26:37 +0000 (0:00:00.216) 0:00:14.639 **** 2025-09-11 00:26:46.399747 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.399760 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:26:46.399772 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:26:46.399784 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:26:46.399797 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:26:46.399808 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:26:46.399820 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:26:46.399833 | orchestrator | 2025-09-11 00:26:46.399845 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-11 00:26:46.399857 | orchestrator | Thursday 11 September 2025 00:26:37 +0000 (0:00:00.515) 0:00:15.155 **** 2025-09-11 00:26:46.399869 | orchestrator | skipping: 
[testbed-manager] 2025-09-11 00:26:46.399881 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:26:46.399894 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:26:46.399906 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:26:46.399918 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:26:46.399930 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:26:46.399942 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:26:46.399955 | orchestrator | 2025-09-11 00:26:46.399967 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-11 00:26:46.399980 | orchestrator | Thursday 11 September 2025 00:26:38 +0000 (0:00:00.237) 0:00:15.392 **** 2025-09-11 00:26:46.399993 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.400005 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:26:46.400017 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:26:46.400029 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:26:46.400041 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:26:46.400053 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:26:46.400064 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:26:46.400075 | orchestrator | 2025-09-11 00:26:46.400085 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-11 00:26:46.400096 | orchestrator | Thursday 11 September 2025 00:26:38 +0000 (0:00:00.494) 0:00:15.886 **** 2025-09-11 00:26:46.400107 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.400124 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:26:46.400135 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:26:46.400146 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:26:46.400156 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:26:46.400167 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:26:46.400177 | orchestrator | changed: 
[testbed-node-2] 2025-09-11 00:26:46.400188 | orchestrator | 2025-09-11 00:26:46.400198 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-11 00:26:46.400209 | orchestrator | Thursday 11 September 2025 00:26:39 +0000 (0:00:01.066) 0:00:16.953 **** 2025-09-11 00:26:46.400220 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.400230 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:26:46.400241 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:26:46.400251 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:26:46.400263 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:26:46.400273 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:26:46.400284 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:26:46.400294 | orchestrator | 2025-09-11 00:26:46.400305 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-11 00:26:46.400316 | orchestrator | Thursday 11 September 2025 00:26:40 +0000 (0:00:01.080) 0:00:18.033 **** 2025-09-11 00:26:46.400344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:26:46.400356 | orchestrator | 2025-09-11 00:26:46.400367 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-11 00:26:46.400407 | orchestrator | Thursday 11 September 2025 00:26:41 +0000 (0:00:00.389) 0:00:18.423 **** 2025-09-11 00:26:46.400418 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:26:46.400429 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:26:46.400439 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:26:46.400450 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:26:46.400461 | orchestrator | changed: [testbed-node-4] 2025-09-11 
00:26:46.400472 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:26:46.400482 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:26:46.400493 | orchestrator | 2025-09-11 00:26:46.400504 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-11 00:26:46.400515 | orchestrator | Thursday 11 September 2025 00:26:42 +0000 (0:00:01.161) 0:00:19.584 **** 2025-09-11 00:26:46.400525 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.400536 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:26:46.400547 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:26:46.400558 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:26:46.400568 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:26:46.400579 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:26:46.400590 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:26:46.400601 | orchestrator | 2025-09-11 00:26:46.400611 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-11 00:26:46.400622 | orchestrator | Thursday 11 September 2025 00:26:42 +0000 (0:00:00.222) 0:00:19.806 **** 2025-09-11 00:26:46.400633 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.400644 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:26:46.400655 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:26:46.400665 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:26:46.400676 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:26:46.400687 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:26:46.400697 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:26:46.400708 | orchestrator | 2025-09-11 00:26:46.400719 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-11 00:26:46.400730 | orchestrator | Thursday 11 September 2025 00:26:42 +0000 (0:00:00.195) 0:00:20.002 **** 2025-09-11 00:26:46.400741 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.400752 | 
orchestrator | ok: [testbed-node-3] 2025-09-11 00:26:46.400813 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:26:46.400826 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:26:46.400837 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:26:46.400847 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:26:46.400858 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:26:46.400868 | orchestrator | 2025-09-11 00:26:46.400879 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-11 00:26:46.400890 | orchestrator | Thursday 11 September 2025 00:26:42 +0000 (0:00:00.188) 0:00:20.191 **** 2025-09-11 00:26:46.400908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:26:46.400920 | orchestrator | 2025-09-11 00:26:46.400932 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-11 00:26:46.400942 | orchestrator | Thursday 11 September 2025 00:26:43 +0000 (0:00:00.262) 0:00:20.454 **** 2025-09-11 00:26:46.400953 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.400964 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:26:46.400975 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:26:46.400985 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:26:46.400996 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:26:46.401006 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:26:46.401017 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:26:46.401027 | orchestrator | 2025-09-11 00:26:46.401038 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-11 00:26:46.401049 | orchestrator | Thursday 11 September 2025 00:26:43 +0000 (0:00:00.480) 0:00:20.934 **** 2025-09-11 00:26:46.401060 | orchestrator | 
skipping: [testbed-manager] 2025-09-11 00:26:46.401070 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:26:46.401081 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:26:46.401092 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:26:46.401102 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:26:46.401113 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:26:46.401124 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:26:46.401134 | orchestrator | 2025-09-11 00:26:46.401145 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-11 00:26:46.401156 | orchestrator | Thursday 11 September 2025 00:26:43 +0000 (0:00:00.218) 0:00:21.153 **** 2025-09-11 00:26:46.401166 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:26:46.401177 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.401188 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:26:46.401198 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:26:46.401209 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:26:46.401219 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:26:46.401230 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:26:46.401241 | orchestrator | 2025-09-11 00:26:46.401251 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-11 00:26:46.401262 | orchestrator | Thursday 11 September 2025 00:26:44 +0000 (0:00:00.957) 0:00:22.110 **** 2025-09-11 00:26:46.401273 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.401284 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:26:46.401295 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:26:46.401305 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:26:46.401316 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:26:46.401327 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:26:46.401337 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:26:46.401348 | orchestrator | 
2025-09-11 00:26:46.401359 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-11 00:26:46.401389 | orchestrator | Thursday 11 September 2025 00:26:45 +0000 (0:00:00.496) 0:00:22.607 **** 2025-09-11 00:26:46.401400 | orchestrator | ok: [testbed-manager] 2025-09-11 00:26:46.401411 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:26:46.401422 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:26:46.401432 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:26:46.401458 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:27:27.948480 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:27:27.948601 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:27:27.948617 | orchestrator | 2025-09-11 00:27:27.948630 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-11 00:27:27.948643 | orchestrator | Thursday 11 September 2025 00:26:46 +0000 (0:00:00.991) 0:00:23.598 **** 2025-09-11 00:27:27.948655 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:27:27.948666 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:27:27.948677 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:27:27.948688 | orchestrator | changed: [testbed-manager] 2025-09-11 00:27:27.948699 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:27:27.948710 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:27:27.948721 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:27:27.948732 | orchestrator | 2025-09-11 00:27:27.948743 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-11 00:27:27.948754 | orchestrator | Thursday 11 September 2025 00:27:04 +0000 (0:00:17.917) 0:00:41.516 **** 2025-09-11 00:27:27.948765 | orchestrator | ok: [testbed-manager] 2025-09-11 00:27:27.948776 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:27:27.948787 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:27:27.948798 | orchestrator 
| ok: [testbed-node-5] 2025-09-11 00:27:27.948809 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:27:27.948820 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:27:27.948831 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:27:27.948841 | orchestrator | 2025-09-11 00:27:27.948852 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-11 00:27:27.948863 | orchestrator | Thursday 11 September 2025 00:27:04 +0000 (0:00:00.204) 0:00:41.720 **** 2025-09-11 00:27:27.948874 | orchestrator | ok: [testbed-manager] 2025-09-11 00:27:27.948885 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:27:27.948896 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:27:27.948907 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:27:27.948918 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:27:27.948928 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:27:27.948941 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:27:27.948953 | orchestrator | 2025-09-11 00:27:27.948966 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-11 00:27:27.948978 | orchestrator | Thursday 11 September 2025 00:27:04 +0000 (0:00:00.196) 0:00:41.916 **** 2025-09-11 00:27:27.948990 | orchestrator | ok: [testbed-manager] 2025-09-11 00:27:27.949003 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:27:27.949016 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:27:27.949028 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:27:27.949040 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:27:27.949053 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:27:27.949065 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:27:27.949078 | orchestrator | 2025-09-11 00:27:27.949091 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-11 00:27:27.949103 | orchestrator | Thursday 11 September 2025 00:27:04 +0000 (0:00:00.209) 0:00:42.125 **** 2025-09-11 
00:27:27.949136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:27:27.949151 | orchestrator | 2025-09-11 00:27:27.949164 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-11 00:27:27.949177 | orchestrator | Thursday 11 September 2025 00:27:05 +0000 (0:00:00.267) 0:00:42.393 **** 2025-09-11 00:27:27.949189 | orchestrator | ok: [testbed-manager] 2025-09-11 00:27:27.949201 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:27:27.949214 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:27:27.949227 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:27:27.949240 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:27:27.949253 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:27:27.949265 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:27:27.949299 | orchestrator | 2025-09-11 00:27:27.949311 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-11 00:27:27.949322 | orchestrator | Thursday 11 September 2025 00:27:06 +0000 (0:00:01.809) 0:00:44.203 **** 2025-09-11 00:27:27.949333 | orchestrator | changed: [testbed-manager] 2025-09-11 00:27:27.949343 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:27:27.949354 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:27:27.949391 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:27:27.949404 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:27:27.949415 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:27:27.949425 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:27:27.949436 | orchestrator | 2025-09-11 00:27:27.949447 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-11 00:27:27.949458 | 
orchestrator | Thursday 11 September 2025 00:27:08 +0000 (0:00:01.113) 0:00:45.317 **** 2025-09-11 00:27:27.949469 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:27:27.949480 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:27:27.949491 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:27:27.949501 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:27:27.949512 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:27:27.949523 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:27:27.949534 | orchestrator | ok: [testbed-manager] 2025-09-11 00:27:27.949544 | orchestrator | 2025-09-11 00:27:27.949555 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-11 00:27:27.949566 | orchestrator | Thursday 11 September 2025 00:27:09 +0000 (0:00:01.596) 0:00:46.913 **** 2025-09-11 00:27:27.949578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:27:27.949591 | orchestrator | 2025-09-11 00:27:27.949602 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-11 00:27:27.949613 | orchestrator | Thursday 11 September 2025 00:27:09 +0000 (0:00:00.286) 0:00:47.200 **** 2025-09-11 00:27:27.949624 | orchestrator | changed: [testbed-manager] 2025-09-11 00:27:27.949634 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:27:27.949645 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:27:27.949656 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:27:27.949667 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:27:27.949678 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:27:27.949688 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:27:27.949699 | orchestrator | 2025-09-11 00:27:27.949727 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-09-11 00:27:27.949739 | orchestrator | Thursday 11 September 2025 00:27:11 +0000 (0:00:01.091) 0:00:48.292 **** 2025-09-11 00:27:27.949750 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:27:27.949761 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:27:27.949772 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:27:27.949782 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:27:27.949793 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:27:27.949804 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:27:27.949815 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:27:27.949825 | orchestrator | 2025-09-11 00:27:27.949836 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-11 00:27:27.949847 | orchestrator | Thursday 11 September 2025 00:27:11 +0000 (0:00:00.311) 0:00:48.603 **** 2025-09-11 00:27:27.949857 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:27:27.949868 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:27:27.949879 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:27:27.949889 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:27:27.949900 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:27:27.949911 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:27:27.949921 | orchestrator | changed: [testbed-manager] 2025-09-11 00:27:27.949941 | orchestrator | 2025-09-11 00:27:27.949952 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-11 00:27:27.949963 | orchestrator | Thursday 11 September 2025 00:27:22 +0000 (0:00:10.812) 0:00:59.416 **** 2025-09-11 00:27:27.949973 | orchestrator | ok: [testbed-manager] 2025-09-11 00:27:27.949984 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:27:27.949995 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:27:27.950006 | orchestrator | ok: [testbed-node-2] 2025-09-11 
00:27:27.950077 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:27:27.950090 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:27:27.950101 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:27:27.950111 | orchestrator |
2025-09-11 00:27:27.950123 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-11 00:27:27.950134 | orchestrator | Thursday 11 September 2025 00:27:23 +0000 (0:00:01.444) 0:01:00.860 ****
2025-09-11 00:27:27.950145 | orchestrator | ok: [testbed-manager]
2025-09-11 00:27:27.950155 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:27:27.950166 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:27:27.950177 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:27:27.950187 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:27:27.950198 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:27:27.950209 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:27:27.950219 | orchestrator |
2025-09-11 00:27:27.950230 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-11 00:27:27.950241 | orchestrator | Thursday 11 September 2025 00:27:24 +0000 (0:00:00.978) 0:01:01.839 ****
2025-09-11 00:27:27.950252 | orchestrator | ok: [testbed-manager]
2025-09-11 00:27:27.950263 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:27:27.950273 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:27:27.950284 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:27:27.950295 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:27:27.950306 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:27:27.950317 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:27:27.950327 | orchestrator |
2025-09-11 00:27:27.950338 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-11 00:27:27.950349 | orchestrator | Thursday 11 September 2025 00:27:24 +0000 (0:00:00.212) 0:01:02.051 ****
2025-09-11 00:27:27.950360 | orchestrator | ok: [testbed-manager]
2025-09-11 00:27:27.950405 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:27:27.950416 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:27:27.950427 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:27:27.950437 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:27:27.950448 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:27:27.950458 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:27:27.950469 | orchestrator |
2025-09-11 00:27:27.950479 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-09-11 00:27:27.950490 | orchestrator | Thursday 11 September 2025 00:27:25 +0000 (0:00:00.192) 0:01:02.243 ****
2025-09-11 00:27:27.950501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:27:27.950512 | orchestrator |
2025-09-11 00:27:27.950523 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-09-11 00:27:27.950534 | orchestrator | Thursday 11 September 2025 00:27:25 +0000 (0:00:00.245) 0:01:02.489 ****
2025-09-11 00:27:27.950544 | orchestrator | ok: [testbed-manager]
2025-09-11 00:27:27.950555 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:27:27.950566 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:27:27.950582 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:27:27.950600 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:27:27.950617 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:27:27.950644 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:27:27.950663 | orchestrator |
2025-09-11 00:27:27.950680 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-09-11 00:27:27.950698 | orchestrator | Thursday 11 September 2025 00:27:27 +0000 (0:00:01.855) 0:01:04.345 ****
2025-09-11 00:27:27.950727 | orchestrator | changed: [testbed-manager]
2025-09-11 00:27:27.950745 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:27:27.950761 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:27:27.950778 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:27:27.950796 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:27:27.950813 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:27:27.950829 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:27:27.950846 | orchestrator |
2025-09-11 00:27:27.950863 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-11 00:27:27.950880 | orchestrator | Thursday 11 September 2025 00:27:27 +0000 (0:00:00.600) 0:01:04.945 ****
2025-09-11 00:27:27.950897 | orchestrator | ok: [testbed-manager]
2025-09-11 00:27:27.950914 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:27:27.950931 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:27:27.950950 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:27:27.950968 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:27:27.950986 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:27:27.951003 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:27:27.951019 | orchestrator |
2025-09-11 00:27:27.951050 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-11 00:29:47.339241 | orchestrator | Thursday 11 September 2025 00:27:27 +0000 (0:00:00.205) 0:01:05.150 ****
2025-09-11 00:29:47.339343 | orchestrator | ok: [testbed-manager]
2025-09-11 00:29:47.339415 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:29:47.339429 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:29:47.339440 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:29:47.339451 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:29:47.339462 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:29:47.339472 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:29:47.339483 | orchestrator |
2025-09-11 00:29:47.339494 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-09-11 00:29:47.339505 | orchestrator | Thursday 11 September 2025 00:27:29 +0000 (0:00:01.149) 0:01:06.300 ****
2025-09-11 00:29:47.339516 | orchestrator | changed: [testbed-manager]
2025-09-11 00:29:47.339528 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:29:47.339538 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:29:47.339549 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:29:47.339559 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:29:47.339570 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:29:47.339581 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:29:47.339591 | orchestrator |
2025-09-11 00:29:47.339603 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-09-11 00:29:47.339614 | orchestrator | Thursday 11 September 2025 00:27:31 +0000 (0:00:02.095) 0:01:08.396 ****
2025-09-11 00:29:47.339624 | orchestrator | ok: [testbed-manager]
2025-09-11 00:29:47.339635 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:29:47.339646 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:29:47.339656 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:29:47.339667 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:29:47.339677 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:29:47.339704 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:29:47.339715 | orchestrator |
2025-09-11 00:29:47.339726 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-09-11 00:29:47.339737 | orchestrator | Thursday 11 September 2025 00:27:37 +0000 (0:00:06.442) 0:01:14.838 ****
2025-09-11 00:29:47.339748 | orchestrator | ok: [testbed-manager]
2025-09-11 00:29:47.339759 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:29:47.339769 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:29:47.339781 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:29:47.339800 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:29:47.339829 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:29:47.339848 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:29:47.339866 | orchestrator |
2025-09-11 00:29:47.339884 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-09-11 00:29:47.339928 | orchestrator | Thursday 11 September 2025 00:28:17 +0000 (0:00:39.648) 0:01:54.487 ****
2025-09-11 00:29:47.339950 | orchestrator | changed: [testbed-manager]
2025-09-11 00:29:47.339968 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:29:47.339987 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:29:47.340005 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:29:47.340023 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:29:47.340041 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:29:47.340059 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:29:47.340078 | orchestrator |
2025-09-11 00:29:47.340104 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-09-11 00:29:47.340122 | orchestrator | Thursday 11 September 2025 00:29:34 +0000 (0:01:17.504) 0:03:11.991 ****
2025-09-11 00:29:47.340141 | orchestrator | ok: [testbed-manager]
2025-09-11 00:29:47.340161 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:29:47.340179 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:29:47.340199 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:29:47.340217 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:29:47.340236 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:29:47.340254 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:29:47.340273 | orchestrator |
2025-09-11 00:29:47.340290 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-09-11 00:29:47.340309 | orchestrator | Thursday 11 September 2025 00:29:36 +0000 (0:00:01.738) 0:03:13.730 ****
2025-09-11 00:29:47.340328 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:29:47.340348 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:29:47.340392 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:29:47.340403 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:29:47.340414 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:29:47.340424 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:29:47.340435 | orchestrator | changed: [testbed-manager]
2025-09-11 00:29:47.340445 | orchestrator |
2025-09-11 00:29:47.340456 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-09-11 00:29:47.340467 | orchestrator | Thursday 11 September 2025 00:29:46 +0000 (0:00:09.786) 0:03:23.516 ****
2025-09-11 00:29:47.340486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-09-11 00:29:47.340508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-09-11 00:29:47.340544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-09-11 00:29:47.340557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-09-11 00:29:47.340582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-09-11 00:29:47.340592 | orchestrator |
2025-09-11 00:29:47.340604 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-09-11 00:29:47.340615 | orchestrator | Thursday 11 September 2025 00:29:46 +0000 (0:00:00.288) 0:03:23.804 ****
2025-09-11 00:29:47.340635 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-11 00:29:47.340652 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:29:47.340669 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-11 00:29:47.340687 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-11 00:29:47.340705 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:29:47.340724 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-11 00:29:47.340739 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:29:47.340750 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:29:47.340761 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-11 00:29:47.340772 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-11 00:29:47.340782 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-11 00:29:47.340793 | orchestrator |
2025-09-11 00:29:47.340804 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-09-11 00:29:47.340820 | orchestrator | Thursday 11 September 2025 00:29:47 +0000 (0:00:00.645) 0:03:24.450 ****
2025-09-11 00:29:47.340831 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-11 00:29:47.340843 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-11 00:29:47.340853 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-11 00:29:47.340864 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-11 00:29:47.340875 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-11 00:29:47.340885 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-11 00:29:47.340896 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-11 00:29:47.340907 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-11 00:29:47.340918 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-11 00:29:47.340928 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-11 00:29:47.340939 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-11 00:29:47.340950 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-11 00:29:47.340961 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-11 00:29:47.340972 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-11 00:29:47.340982 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-11 00:29:47.340993 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-11 00:29:47.341003 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-11 00:29:47.341022 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-11 00:29:47.341033 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-11 00:29:47.341043 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-11 00:29:47.341065 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-11 00:29:54.694153 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-11 00:29:54.694261 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-11 00:29:54.694276 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:29:54.694290 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-11 00:29:54.694302 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-11 00:29:54.694312 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-11 00:29:54.694323 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-11 00:29:54.694334 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-11 00:29:54.694345 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-11 00:29:54.694406 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-11 00:29:54.694419 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-11 00:29:54.694430 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-11 00:29:54.694440 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-11 00:29:54.694451 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-11 00:29:54.694462 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:29:54.694473 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-11 00:29:54.694483 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-11 00:29:54.694494 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-11 00:29:54.694505 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-11 00:29:54.694516 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-11 00:29:54.694543 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-11 00:29:54.694555 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:29:54.694566 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:29:54.694577 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-11 00:29:54.694587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-11 00:29:54.694598 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-11 00:29:54.694609 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-11 00:29:54.694621 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-11 00:29:54.694635 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-11 00:29:54.694647 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-11 00:29:54.694681 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-11 00:29:54.694694 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-11 00:29:54.694706 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-11 00:29:54.694719 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-11 00:29:54.694731 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-11 00:29:54.694744 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-11 00:29:54.694756 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-11 00:29:54.694768 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-11 00:29:54.694780 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-11 00:29:54.694793 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-11 00:29:54.694805 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-11 00:29:54.694817 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-11 00:29:54.694829 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-11 00:29:54.694842 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-11 00:29:54.694873 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-11 00:29:54.694886 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-11 00:29:54.694899 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-11 00:29:54.694911 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-11 00:29:54.694923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-11 00:29:54.694935 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-11 00:29:54.694947 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-11 00:29:54.694960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-11 00:29:54.694971 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-11 00:29:54.694982 | orchestrator |
2025-09-11 00:29:54.694993 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-09-11 00:29:54.695004 | orchestrator | Thursday 11 September 2025 00:29:51 +0000 (0:00:04.676) 0:03:29.126 ****
2025-09-11 00:29:54.695015 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-11 00:29:54.695025 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-11 00:29:54.695036 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-11 00:29:54.695047 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-11 00:29:54.695058 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-11 00:29:54.695068 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-11 00:29:54.695079 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-11 00:29:54.695090 | orchestrator |
2025-09-11 00:29:54.695101 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-09-11 00:29:54.695119 | orchestrator | Thursday 11 September 2025 00:29:53 +0000 (0:00:00.471) 0:03:30.632 ****
2025-09-11 00:29:54.695130 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-11 00:29:54.695141 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:29:54.695152 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-11 00:29:54.695171 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-11 00:29:54.695182 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:29:54.695193 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:29:54.695204 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-11 00:29:54.695215 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:29:54.695226 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-11 00:29:54.695236 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-11 00:29:54.695247 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-11 00:29:54.695258 | orchestrator |
2025-09-11 00:29:54.695268 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-11 00:29:54.695279 | orchestrator | Thursday 11 September 2025 00:29:53 +0000 (0:00:00.532) 0:03:31.104 ****
2025-09-11 00:29:54.695290 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-11 00:29:54.695300 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:29:54.695316 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-11 00:29:54.695334 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-11 00:29:54.695352 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:29:54.695408 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:29:54.695425 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-11 00:29:54.695441 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:29:54.695458 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-11 00:29:54.695474 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-11 00:29:54.695490 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-11 00:29:54.695508 | orchestrator |
2025-09-11 00:29:54.695525 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-11 00:29:54.695544 | orchestrator | Thursday 11 September 2025 00:29:54 +0000 (0:00:00.264) 0:03:31.636 ****
2025-09-11 00:29:54.695562 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:29:54.695581 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:29:54.695599 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:29:54.695614 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:29:54.695634 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:30:06.418596 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:30:06.418710 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:30:06.418726 | orchestrator |
2025-09-11 00:30:06.418739 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-11 00:30:06.418751 | orchestrator | Thursday 11 September 2025 00:29:54 +0000 (0:00:00.264) 0:03:31.900 ****
2025-09-11 00:30:06.418762 | orchestrator | ok: [testbed-manager]
2025-09-11 00:30:06.418776 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:30:06.418787 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:30:06.418798 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:30:06.418838 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:30:06.418850 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:30:06.418861 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:30:06.418871 | orchestrator |
2025-09-11 00:30:06.418882 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-11 00:30:06.418893 | orchestrator | Thursday 11 September 2025 00:30:00 +0000 (0:00:05.638) 0:03:37.539 ****
2025-09-11 00:30:06.418904 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-11 00:30:06.418916 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-11 00:30:06.418926 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:30:06.418937 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:30:06.418948 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-11 00:30:06.418959 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-11 00:30:06.418969 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:30:06.418980 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:30:06.418991 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-11 00:30:06.419001 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-11 00:30:06.419012 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:30:06.419027 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:30:06.419038 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-11 00:30:06.419048 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:30:06.419059 | orchestrator |
2025-09-11 00:30:06.419070 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-11 00:30:06.419080 | orchestrator | Thursday 11 September 2025 00:30:00 +0000 (0:00:00.254) 0:03:37.793 ****
2025-09-11 00:30:06.419091 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-11 00:30:06.419102 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-09-11 00:30:06.419113 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-09-11 00:30:06.419123 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-11 00:30:06.419134 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-09-11 00:30:06.419144 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-09-11 00:30:06.419155 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-09-11 00:30:06.419165 | orchestrator |
2025-09-11 00:30:06.419176 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-09-11 00:30:06.419187 | orchestrator | Thursday 11 September 2025 00:30:01 +0000 (0:00:01.107) 0:03:38.901 ****
2025-09-11 00:30:06.419214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:30:06.419228 | orchestrator |
2025-09-11 00:30:06.419239 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-09-11 00:30:06.419250 | orchestrator | Thursday 11 September 2025 00:30:02 +0000 (0:00:00.501) 0:03:39.403 ****
2025-09-11 00:30:06.419261 | orchestrator | ok: [testbed-manager]
2025-09-11 00:30:06.419272 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:30:06.419282 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:30:06.419293 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:30:06.419303 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:30:06.419314 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:30:06.419325 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:30:06.419335 | orchestrator |
2025-09-11 00:30:06.419346 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-09-11 00:30:06.419380 | orchestrator | Thursday 11 September 2025 00:30:03 +0000 (0:00:01.356) 0:03:40.760 ****
2025-09-11 00:30:06.419392 | orchestrator | ok: [testbed-manager]
2025-09-11 00:30:06.419402 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:30:06.419413 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:30:06.419424 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:30:06.419435 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:30:06.419445 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:30:06.419464 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:30:06.419475 | orchestrator |
2025-09-11 00:30:06.419486 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-09-11 00:30:06.419497 | orchestrator | Thursday 11 September 2025 00:30:04 +0000 (0:00:00.632) 0:03:41.392 ****
2025-09-11 00:30:06.419507 | orchestrator | changed: [testbed-manager]
2025-09-11 00:30:06.419518 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:30:06.419529 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:30:06.419539 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:30:06.419550 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:30:06.419560 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:30:06.419571 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:30:06.419581 | orchestrator |
2025-09-11 00:30:06.419592 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-09-11 00:30:06.419603 | orchestrator | Thursday 11 September 2025 00:30:04 +0000 (0:00:00.622) 0:03:42.015 ****
2025-09-11 00:30:06.419613 | orchestrator | ok: [testbed-manager]
2025-09-11 00:30:06.419624 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:30:06.419634 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:30:06.419645 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:30:06.419656 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:30:06.419666 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:30:06.419676 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:30:06.419687 | orchestrator |
2025-09-11 00:30:06.419698 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-11 00:30:06.419709 | orchestrator | Thursday 11 September 2025 00:30:05 +0000 (0:00:00.624) 0:03:42.640 ****
2025-09-11 00:30:06.419749 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757549027.7626119, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 00:30:06.419766 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757549060.2600594, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 00:30:06.419778 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757549060.786559, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 00:30:06.419795 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757549063.657175, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 00:30:06.419808 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757549059.5160542, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 00:30:06.419826 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757549054.6754584, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 00:30:06.419837 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757549055.8674738, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 00:30:06.419857 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 00:30:23.188766 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 00:30:23.188888 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name':
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 00:30:23.188905 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 00:30:23.188917 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 00:30:23.188953 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 00:30:23.188966 | 
orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 00:30:23.188977 | orchestrator | 2025-09-11 00:30:23.188991 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-11 00:30:23.189004 | orchestrator | Thursday 11 September 2025 00:30:06 +0000 (0:00:00.976) 0:03:43.616 **** 2025-09-11 00:30:23.189015 | orchestrator | changed: [testbed-manager] 2025-09-11 00:30:23.189027 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:30:23.189037 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:30:23.189048 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:30:23.189058 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:30:23.189069 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:30:23.189079 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:30:23.189090 | orchestrator | 2025-09-11 00:30:23.189101 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-11 00:30:23.189112 | orchestrator | Thursday 11 September 2025 00:30:07 +0000 (0:00:01.120) 0:03:44.737 **** 2025-09-11 00:30:23.189122 | orchestrator | changed: [testbed-manager] 2025-09-11 00:30:23.189133 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:30:23.189143 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:30:23.189154 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:30:23.189180 | orchestrator | changed: [testbed-node-5] 2025-09-11 
00:30:23.189192 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:30:23.189202 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:30:23.189213 | orchestrator | 2025-09-11 00:30:23.189224 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-11 00:30:23.189234 | orchestrator | Thursday 11 September 2025 00:30:08 +0000 (0:00:01.149) 0:03:45.886 **** 2025-09-11 00:30:23.189245 | orchestrator | changed: [testbed-manager] 2025-09-11 00:30:23.189256 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:30:23.189266 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:30:23.189277 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:30:23.189303 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:30:23.189314 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:30:23.189325 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:30:23.189335 | orchestrator | 2025-09-11 00:30:23.189346 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-11 00:30:23.189357 | orchestrator | Thursday 11 September 2025 00:30:09 +0000 (0:00:01.053) 0:03:46.940 **** 2025-09-11 00:30:23.189404 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:30:23.189415 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:30:23.189426 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:30:23.189436 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:30:23.189447 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:30:23.189457 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:30:23.189468 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:30:23.189478 | orchestrator | 2025-09-11 00:30:23.189489 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-11 00:30:23.189500 | orchestrator | Thursday 11 September 2025 00:30:09 +0000 (0:00:00.240) 0:03:47.180 **** 2025-09-11 
00:30:23.189511 | orchestrator | ok: [testbed-manager] 2025-09-11 00:30:23.189523 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:30:23.189534 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:30:23.189544 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:30:23.189555 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:30:23.189566 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:30:23.189576 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:30:23.189587 | orchestrator | 2025-09-11 00:30:23.189598 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-11 00:30:23.189609 | orchestrator | Thursday 11 September 2025 00:30:10 +0000 (0:00:00.652) 0:03:47.833 **** 2025-09-11 00:30:23.189626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:30:23.189640 | orchestrator | 2025-09-11 00:30:23.189651 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-11 00:30:23.189662 | orchestrator | Thursday 11 September 2025 00:30:10 +0000 (0:00:00.331) 0:03:48.164 **** 2025-09-11 00:30:23.189672 | orchestrator | ok: [testbed-manager] 2025-09-11 00:30:23.189683 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:30:23.189693 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:30:23.189704 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:30:23.189714 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:30:23.189725 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:30:23.189735 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:30:23.189746 | orchestrator | 2025-09-11 00:30:23.189756 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-11 00:30:23.189767 | orchestrator | 
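The `Remove pam_motd.so rule` task above edits `/etc/pam.d/sshd` and `/etc/pam.d/login` on every host. A minimal sketch of the same edit, assuming the role simply drops any rule that loads `pam_motd.so` (the MOTD content itself is managed by the `Copy motd file` task), demonstrated on a scratch copy rather than the real PAM config:

```shell
# Build a scratch stand-in for /etc/pam.d/sshd (the two sample rules below
# are assumptions for illustration, not the hosts' real contents).
cat > /tmp/sshd.pam <<'EOF'
session    optional     pam_motd.so  motd=/run/motd.dynamic
session    required     pam_env.so   readenv=1
EOF

# Drop every rule that references pam_motd.so, as the task does.
sed -i '/pam_motd\.so/d' /tmp/sshd.pam

# Only the pam_env.so rule should remain.
cat /tmp/sshd.pam
```

Removing the PAM rule keeps sshd/login from printing a dynamic MOTD on top of the static one the role installs.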
Thursday 11 September 2025 00:30:19 +0000 (0:00:08.184) 0:03:56.349 ****
2025-09-11 00:30:23.189778 | orchestrator | ok: [testbed-manager]
2025-09-11 00:30:23.189788 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:30:23.189799 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:30:23.189809 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:30:23.189820 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:30:23.189830 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:30:23.189841 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:30:23.189852 | orchestrator |
2025-09-11 00:30:23.189862 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-09-11 00:30:23.189873 | orchestrator | Thursday 11 September 2025 00:30:20 +0000 (0:00:01.248) 0:03:57.598 ****
2025-09-11 00:30:23.189884 | orchestrator | ok: [testbed-manager]
2025-09-11 00:30:23.189894 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:30:23.189905 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:30:23.189915 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:30:23.189926 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:30:23.189936 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:30:23.189947 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:30:23.189957 | orchestrator |
2025-09-11 00:30:23.189968 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-09-11 00:30:23.189979 | orchestrator | Thursday 11 September 2025 00:30:22 +0000 (0:00:01.835) 0:03:59.434 ****
2025-09-11 00:30:23.189989 | orchestrator | ok: [testbed-manager]
2025-09-11 00:30:23.190006 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:30:23.190069 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:30:23.190081 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:30:23.190092 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:30:23.190102 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:30:23.190113 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:30:23.190123 | orchestrator |
2025-09-11 00:30:23.190134 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-09-11 00:30:23.190146 | orchestrator | Thursday 11 September 2025 00:30:22 +0000 (0:00:00.271) 0:03:59.706 ****
2025-09-11 00:30:23.190156 | orchestrator | ok: [testbed-manager]
2025-09-11 00:30:23.190167 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:30:23.190177 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:30:23.190188 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:30:23.190198 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:30:23.190209 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:30:23.190219 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:30:23.190229 | orchestrator |
2025-09-11 00:30:23.190240 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-09-11 00:30:23.190251 | orchestrator | Thursday 11 September 2025 00:30:22 +0000 (0:00:00.402) 0:04:00.108 ****
2025-09-11 00:30:23.190262 | orchestrator | ok: [testbed-manager]
2025-09-11 00:30:23.190272 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:30:23.190282 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:30:23.190293 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:30:23.190303 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:30:23.190322 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:31:32.392559 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:31:32.392669 | orchestrator |
2025-09-11 00:31:32.392686 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-09-11 00:31:32.392701 | orchestrator | Thursday 11 September 2025 00:30:23 +0000 (0:00:00.282) 0:04:00.391 ****
2025-09-11 00:31:32.392712 | orchestrator | ok: [testbed-manager]
2025-09-11 00:31:32.392723 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:31:32.392734 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:31:32.392745 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:31:32.392756 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:31:32.392766 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:31:32.392777 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:31:32.392788 | orchestrator |
2025-09-11 00:31:32.392799 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-09-11 00:31:32.392810 | orchestrator | Thursday 11 September 2025 00:30:29 +0000 (0:00:05.849) 0:04:06.240 ****
2025-09-11 00:31:32.392824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:31:32.392837 | orchestrator |
2025-09-11 00:31:32.392848 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-09-11 00:31:32.392859 | orchestrator | Thursday 11 September 2025 00:30:29 +0000 (0:00:00.380) 0:04:06.620 ****
2025-09-11 00:31:32.392870 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-09-11 00:31:32.392881 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-09-11 00:31:32.392892 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-09-11 00:31:32.392903 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-09-11 00:31:32.392914 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:31:32.392925 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-09-11 00:31:32.392936 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-09-11 00:31:32.392947 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:31:32.392957 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:31:32.392968 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-09-11 00:31:32.392979 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-09-11 00:31:32.393016 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-09-11 00:31:32.393042 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-09-11 00:31:32.393053 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:31:32.393064 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:31:32.393074 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-09-11 00:31:32.393085 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-09-11 00:31:32.393097 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:31:32.393109 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-09-11 00:31:32.393122 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-09-11 00:31:32.393136 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:31:32.393148 | orchestrator |
2025-09-11 00:31:32.393161 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-09-11 00:31:32.393173 | orchestrator | Thursday 11 September 2025 00:30:29 +0000 (0:00:00.303) 0:04:06.924 ****
2025-09-11 00:31:32.393185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:31:32.393199 | orchestrator |
2025-09-11 00:31:32.393211 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-09-11 00:31:32.393225 | orchestrator | Thursday 11 September 2025 00:30:30 +0000 (0:00:00.381) 0:04:07.306 ****
2025-09-11 00:31:32.393238 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-09-11 00:31:32.393250 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:31:32.393262 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-11 00:31:32.393274 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:31:32.393287 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-11 00:31:32.393299 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-11 00:31:32.393311 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:31:32.393323 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-09-11 00:31:32.393336 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:31:32.393348 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-09-11 00:31:32.393360 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:31:32.393373 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:31:32.393414 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-09-11 00:31:32.393428 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:31:32.393440 | orchestrator |
2025-09-11 00:31:32.393453 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-11 00:31:32.393465 | orchestrator | Thursday 11 September 2025 00:30:30 +0000 (0:00:00.283) 0:04:07.589 ****
2025-09-11 00:31:32.393476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:31:32.393487 | orchestrator |
2025-09-11 00:31:32.393497 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-11 00:31:32.393508 | orchestrator | Thursday 11 September 2025 00:30:30 +0000 (0:00:00.376) 0:04:07.965 ****
2025-09-11 00:31:32.393519 | orchestrator | changed: [testbed-manager]
2025-09-11 00:31:32.393547 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:31:32.393559 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:31:32.393570 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:31:32.393581 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:31:32.393592 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:31:32.393602 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:31:32.393613 | orchestrator |
2025-09-11 00:31:32.393624 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-11 00:31:32.393644 | orchestrator | Thursday 11 September 2025 00:31:04 +0000 (0:00:33.669) 0:04:41.634 ****
2025-09-11 00:31:32.393655 | orchestrator | changed: [testbed-manager]
2025-09-11 00:31:32.393666 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:31:32.393676 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:31:32.393687 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:31:32.393698 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:31:32.393708 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:31:32.393719 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:31:32.393730 | orchestrator |
2025-09-11 00:31:32.393740 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-11 00:31:32.393751 | orchestrator | Thursday 11 September 2025 00:31:12 +0000 (0:00:08.468) 0:04:50.103 ****
2025-09-11 00:31:32.393762 | orchestrator | changed: [testbed-manager]
2025-09-11 00:31:32.393773 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:31:32.393783 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:31:32.393794 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:31:32.393804 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:31:32.393815 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:31:32.393826 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:31:32.393836 | orchestrator |
2025-09-11 00:31:32.393847 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-11 00:31:32.393858 | orchestrator | Thursday 11 September 2025 00:31:20 +0000 (0:00:07.951) 0:04:58.054 ****
2025-09-11 00:31:32.393869 | orchestrator | ok: [testbed-manager]
2025-09-11 00:31:32.393879 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:31:32.393890 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:31:32.393901 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:31:32.393912 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:31:32.393922 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:31:32.393933 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:31:32.393944 | orchestrator |
2025-09-11 00:31:32.393954 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-09-11 00:31:32.393966 | orchestrator | Thursday 11 September 2025 00:31:22 +0000 (0:00:01.799) 0:04:59.853 ****
2025-09-11 00:31:32.393976 | orchestrator | changed: [testbed-manager]
2025-09-11 00:31:32.393987 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:31:32.394003 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:31:32.394014 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:31:32.394076 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:31:32.394087 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:31:32.394098 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:31:32.394108 | orchestrator |
2025-09-11 00:31:32.394119 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-09-11 00:31:32.394130 | orchestrator | Thursday 11 September 2025 00:31:28 +0000 (0:00:05.743) 0:05:05.597 ****
2025-09-11 00:31:32.394141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5,
testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:31:32.394154 | orchestrator |
2025-09-11 00:31:32.394165 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-09-11 00:31:32.394176 | orchestrator | Thursday 11 September 2025 00:31:28 +0000 (0:00:00.499) 0:05:06.096 ****
2025-09-11 00:31:32.394187 | orchestrator | changed: [testbed-manager]
2025-09-11 00:31:32.394197 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:31:32.394208 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:31:32.394219 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:31:32.394229 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:31:32.394240 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:31:32.394250 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:31:32.394261 | orchestrator |
2025-09-11 00:31:32.394272 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-09-11 00:31:32.394290 | orchestrator | Thursday 11 September 2025 00:31:29 +0000 (0:00:00.689) 0:05:06.786 ****
2025-09-11 00:31:32.394301 | orchestrator | ok: [testbed-manager]
2025-09-11 00:31:32.394311 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:31:32.394322 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:31:32.394333 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:31:32.394343 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:31:32.394354 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:31:32.394365 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:31:32.394375 | orchestrator |
2025-09-11 00:31:32.394402 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-09-11 00:31:32.394414 | orchestrator | Thursday 11 September 2025 00:31:31 +0000 (0:00:01.765) 0:05:08.552 ****
2025-09-11 00:31:32.394425 | orchestrator | changed: [testbed-manager]
2025-09-11 00:31:32.394436 | orchestrator | changed: [testbed-node-3]
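The `Remove cloud-init configuration directory` step above is the last part of the cloud-init cleanup: the package is uninstalled first, then its configuration tree is deleted. A minimal sketch, assuming the role removes `/etc/cloud` recursively, shown here against a scratch directory so nothing real is touched:

```shell
# Recreate a miniature /etc/cloud layout under /tmp
# (the file names are assumptions for illustration).
mkdir -p /tmp/etc-cloud/cloud.cfg.d
touch /tmp/etc-cloud/cloud.cfg /tmp/etc-cloud/cloud.cfg.d/99-testbed.cfg

# The removal step then boils down to (on the real hosts: /etc/cloud):
rm -rf /tmp/etc-cloud

test ! -e /tmp/etc-cloud && echo 'cloud-init configuration removed'
```

Removing both the package and its leftover configuration prevents cloud-init from re-running against the metadata service on subsequent boots.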
2025-09-11 00:31:32.394446 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:31:32.394457 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:31:32.394467 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:31:32.394478 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:31:32.394488 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:31:32.394499 | orchestrator |
2025-09-11 00:31:32.394510 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-09-11 00:31:32.394521 | orchestrator | Thursday 11 September 2025 00:31:32 +0000 (0:00:00.768) 0:05:09.320 ****
2025-09-11 00:31:32.394531 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:31:32.394542 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:31:32.394553 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:31:32.394563 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:31:32.394574 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:31:32.394584 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:31:32.394595 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:31:32.394606 | orchestrator |
2025-09-11 00:31:32.394616 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-09-11 00:31:32.394634 | orchestrator | Thursday 11 September 2025 00:31:32 +0000 (0:00:00.271) 0:05:09.591 ****
2025-09-11 00:31:58.714457 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:31:58.714560 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:31:58.714575 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:31:58.714587 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:31:58.714599 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:31:58.714610 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:31:58.714622 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:31:58.714634 | orchestrator |
2025-09-11 00:31:58.714649 |
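On these Debian-family hosts, the `Set timezone to UTC` task amounts to pointing `/etc/localtime` at the zoneinfo entry and recording the zone name in `/etc/timezone` (on a systemd host, `timedatectl set-timezone Etc/UTC` does the same). A sketch using `/tmp` stand-ins for the real files (the `/tmp` paths are assumptions for illustration):

```shell
# Stand-ins for /etc/localtime and /etc/timezone so the real system
# configuration is left alone.
ZONEINFO=/usr/share/zoneinfo/Etc/UTC
ln -sfn "$ZONEINFO" /tmp/localtime
echo 'Etc/UTC' > /tmp/timezone

readlink /tmp/localtime
cat /tmp/timezone
```

The `Create /etc/adjtime` tasks are skipped here, which is consistent with the hosts keeping the hardware clock in UTC by default.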
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-09-11 00:31:58.714662 | orchestrator | Thursday 11 September 2025 00:31:32 +0000 (0:00:00.418) 0:05:10.010 ****
2025-09-11 00:31:58.714673 | orchestrator | ok: [testbed-manager]
2025-09-11 00:31:58.714681 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:31:58.714689 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:31:58.714696 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:31:58.714703 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:31:58.714711 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:31:58.714718 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:31:58.714725 | orchestrator |
2025-09-11 00:31:58.714733 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-09-11 00:31:58.714740 | orchestrator | Thursday 11 September 2025 00:31:33 +0000 (0:00:00.275) 0:05:10.285 ****
2025-09-11 00:31:58.714748 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:31:58.714755 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:31:58.714762 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:31:58.714769 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:31:58.714776 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:31:58.714783 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:31:58.714791 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:31:58.714820 | orchestrator |
2025-09-11 00:31:58.714827 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-09-11 00:31:58.714835 | orchestrator | Thursday 11 September 2025 00:31:33 +0000 (0:00:00.258) 0:05:10.544 ****
2025-09-11 00:31:58.714842 | orchestrator | ok: [testbed-manager]
2025-09-11 00:31:58.714850 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:31:58.714857 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:31:58.714864 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:31:58.714871 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:31:58.714878 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:31:58.714885 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:31:58.714892 | orchestrator |
2025-09-11 00:31:58.714900 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-11 00:31:58.714907 | orchestrator | Thursday 11 September 2025 00:31:33 +0000 (0:00:00.271) 0:05:10.818 ****
2025-09-11 00:31:58.714914 | orchestrator | ok: [testbed-manager] =>
2025-09-11 00:31:58.714921 | orchestrator |  docker_version: 5:27.5.1
2025-09-11 00:31:58.714929 | orchestrator | ok: [testbed-node-3] =>
2025-09-11 00:31:58.714936 | orchestrator |  docker_version: 5:27.5.1
2025-09-11 00:31:58.714943 | orchestrator | ok: [testbed-node-4] =>
2025-09-11 00:31:58.714950 | orchestrator |  docker_version: 5:27.5.1
2025-09-11 00:31:58.714957 | orchestrator | ok: [testbed-node-5] =>
2025-09-11 00:31:58.714965 | orchestrator |  docker_version: 5:27.5.1
2025-09-11 00:31:58.714972 | orchestrator | ok: [testbed-node-0] =>
2025-09-11 00:31:58.714980 | orchestrator |  docker_version: 5:27.5.1
2025-09-11 00:31:58.714989 | orchestrator | ok: [testbed-node-1] =>
2025-09-11 00:31:58.714997 | orchestrator |  docker_version: 5:27.5.1
2025-09-11 00:31:58.715006 | orchestrator | ok: [testbed-node-2] =>
2025-09-11 00:31:58.715014 | orchestrator |  docker_version: 5:27.5.1
2025-09-11 00:31:58.715022 | orchestrator |
2025-09-11 00:31:58.715031 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-11 00:31:58.715039 | orchestrator | Thursday 11 September 2025 00:31:33 +0000 (0:00:00.271) 0:05:11.090 ****
2025-09-11 00:31:58.715048 | orchestrator | ok: [testbed-manager] =>
2025-09-11 00:31:58.715056 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-11 00:31:58.715064 | orchestrator | ok: [testbed-node-3] =>
2025-09-11 00:31:58.715072 |
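The `5:27.5.1` printed above is a Debian-style package version: the `5:` prefix is the packaging epoch and `27.5.1` is the upstream Docker release. When scripting against such a version string, the epoch is usually stripped first; a small sketch:

```shell
# Split a Debian package version of the form "epoch:upstream"
# (the variable names are illustrative).
docker_version='5:27.5.1'           # value reported by the role above
epoch=${docker_version%%:*}         # everything before the first colon
upstream=${docker_version#*:}       # everything after the first colon
echo "epoch=$epoch upstream=$upstream"   # -> epoch=5 upstream=27.5.1
```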
orchestrator |  docker_cli_version: 5:27.5.1 2025-09-11 00:31:58.715080 | orchestrator | ok: [testbed-node-4] =>  2025-09-11 00:31:58.715088 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-11 00:31:58.715096 | orchestrator | ok: [testbed-node-5] =>  2025-09-11 00:31:58.715104 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-11 00:31:58.715113 | orchestrator | ok: [testbed-node-0] =>  2025-09-11 00:31:58.715121 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-11 00:31:58.715130 | orchestrator | ok: [testbed-node-1] =>  2025-09-11 00:31:58.715138 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-11 00:31:58.715145 | orchestrator | ok: [testbed-node-2] =>  2025-09-11 00:31:58.715152 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-11 00:31:58.715159 | orchestrator | 2025-09-11 00:31:58.715166 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-11 00:31:58.715174 | orchestrator | Thursday 11 September 2025 00:31:34 +0000 (0:00:00.282) 0:05:11.373 **** 2025-09-11 00:31:58.715182 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:31:58.715190 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:31:58.715197 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:31:58.715205 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:31:58.715213 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:31:58.715221 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:31:58.715228 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:31:58.715236 | orchestrator | 2025-09-11 00:31:58.715244 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-11 00:31:58.715252 | orchestrator | Thursday 11 September 2025 00:31:34 +0000 (0:00:00.259) 0:05:11.632 **** 2025-09-11 00:31:58.715260 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:31:58.715274 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:31:58.715282 
| orchestrator | skipping: [testbed-node-4] 2025-09-11 00:31:58.715289 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:31:58.715297 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:31:58.715305 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:31:58.715312 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:31:58.715320 | orchestrator | 2025-09-11 00:31:58.715328 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-11 00:31:58.715336 | orchestrator | Thursday 11 September 2025 00:31:34 +0000 (0:00:00.261) 0:05:11.893 **** 2025-09-11 00:31:58.715359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:31:58.715370 | orchestrator | 2025-09-11 00:31:58.715378 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-11 00:31:58.715386 | orchestrator | Thursday 11 September 2025 00:31:35 +0000 (0:00:00.391) 0:05:12.285 **** 2025-09-11 00:31:58.715415 | orchestrator | ok: [testbed-manager] 2025-09-11 00:31:58.715424 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:31:58.715432 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:31:58.715439 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:31:58.715447 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:31:58.715455 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:31:58.715462 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:31:58.715470 | orchestrator | 2025-09-11 00:31:58.715478 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-11 00:31:58.715486 | orchestrator | Thursday 11 September 2025 00:31:35 +0000 (0:00:00.801) 0:05:13.087 **** 2025-09-11 00:31:58.715494 | orchestrator | ok: [testbed-manager] 
2025-09-11 00:31:58.715502 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:31:58.715509 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:31:58.715517 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:31:58.715525 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:31:58.715532 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:31:58.715554 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:31:58.715563 | orchestrator | 2025-09-11 00:31:58.715571 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-11 00:31:58.715580 | orchestrator | Thursday 11 September 2025 00:31:39 +0000 (0:00:03.211) 0:05:16.299 **** 2025-09-11 00:31:58.715588 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-11 00:31:58.715596 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-11 00:31:58.715603 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-11 00:31:58.715611 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-11 00:31:58.715619 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-11 00:31:58.715627 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:31:58.715635 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-11 00:31:58.715643 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-11 00:31:58.715651 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-11 00:31:58.715658 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-11 00:31:58.715666 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:31:58.715674 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-11 00:31:58.715685 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-11 00:31:58.715693 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-11 00:31:58.715701 | 
orchestrator | skipping: [testbed-node-4] 2025-09-11 00:31:58.715709 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-11 00:31:58.715716 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-11 00:31:58.715724 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-11 00:31:58.715738 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:31:58.715746 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-11 00:31:58.715753 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-11 00:31:58.715761 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-11 00:31:58.715769 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:31:58.715777 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:31:58.715784 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-11 00:31:58.715792 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-11 00:31:58.715800 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-11 00:31:58.715808 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:31:58.715815 | orchestrator | 2025-09-11 00:31:58.715823 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-11 00:31:58.715831 | orchestrator | Thursday 11 September 2025 00:31:39 +0000 (0:00:00.611) 0:05:16.910 **** 2025-09-11 00:31:58.715839 | orchestrator | ok: [testbed-manager] 2025-09-11 00:31:58.715846 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:31:58.715854 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:31:58.715862 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:31:58.715870 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:31:58.715877 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:31:58.715885 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:31:58.715893 | orchestrator | 2025-09-11 
00:31:58.715900 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-11 00:31:58.715908 | orchestrator | Thursday 11 September 2025 00:31:46 +0000 (0:00:06.482) 0:05:23.392 **** 2025-09-11 00:31:58.715916 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:31:58.715923 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:31:58.715931 | orchestrator | ok: [testbed-manager] 2025-09-11 00:31:58.715939 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:31:58.715947 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:31:58.715954 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:31:58.715962 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:31:58.715969 | orchestrator | 2025-09-11 00:31:58.715977 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-11 00:31:58.715985 | orchestrator | Thursday 11 September 2025 00:31:47 +0000 (0:00:01.193) 0:05:24.586 **** 2025-09-11 00:31:58.715993 | orchestrator | ok: [testbed-manager] 2025-09-11 00:31:58.716000 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:31:58.716008 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:31:58.716016 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:31:58.716023 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:31:58.716031 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:31:58.716039 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:31:58.716046 | orchestrator | 2025-09-11 00:31:58.716054 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-11 00:31:58.716062 | orchestrator | Thursday 11 September 2025 00:31:55 +0000 (0:00:08.063) 0:05:32.650 **** 2025-09-11 00:31:58.716070 | orchestrator | changed: [testbed-manager] 2025-09-11 00:31:58.716077 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:31:58.716085 | orchestrator | changed: [testbed-node-5] 2025-09-11 
00:31:58.716098 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:32:40.710812 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:32:40.710925 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:32:40.710941 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:32:40.710953 | orchestrator | 2025-09-11 00:32:40.710966 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-11 00:32:40.710979 | orchestrator | Thursday 11 September 2025 00:31:58 +0000 (0:00:03.264) 0:05:35.914 **** 2025-09-11 00:32:40.710990 | orchestrator | ok: [testbed-manager] 2025-09-11 00:32:40.711001 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:32:40.711012 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:32:40.711047 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:32:40.711058 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:32:40.711069 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:32:40.711080 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:32:40.711090 | orchestrator | 2025-09-11 00:32:40.711101 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-11 00:32:40.711112 | orchestrator | Thursday 11 September 2025 00:31:59 +0000 (0:00:01.261) 0:05:37.176 **** 2025-09-11 00:32:40.711123 | orchestrator | ok: [testbed-manager] 2025-09-11 00:32:40.711133 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:32:40.711143 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:32:40.711154 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:32:40.711164 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:32:40.711175 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:32:40.711185 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:32:40.711196 | orchestrator | 2025-09-11 00:32:40.711206 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-11 
00:32:40.711217 | orchestrator | Thursday 11 September 2025 00:32:01 +0000 (0:00:01.257) 0:05:38.433 **** 2025-09-11 00:32:40.711227 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:32:40.711238 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:32:40.711248 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:32:40.711259 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:32:40.711269 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:32:40.711279 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:32:40.711290 | orchestrator | changed: [testbed-manager] 2025-09-11 00:32:40.711300 | orchestrator | 2025-09-11 00:32:40.711311 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-11 00:32:40.711322 | orchestrator | Thursday 11 September 2025 00:32:01 +0000 (0:00:00.707) 0:05:39.140 **** 2025-09-11 00:32:40.711334 | orchestrator | ok: [testbed-manager] 2025-09-11 00:32:40.711347 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:32:40.711360 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:32:40.711373 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:32:40.711426 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:32:40.711440 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:32:40.711453 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:32:40.711465 | orchestrator | 2025-09-11 00:32:40.711478 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-11 00:32:40.711491 | orchestrator | Thursday 11 September 2025 00:32:11 +0000 (0:00:09.500) 0:05:48.641 **** 2025-09-11 00:32:40.711503 | orchestrator | changed: [testbed-manager] 2025-09-11 00:32:40.711515 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:32:40.711527 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:32:40.711539 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:32:40.711551 | orchestrator | changed: 
[testbed-node-1] 2025-09-11 00:32:40.711563 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:32:40.711575 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:32:40.711588 | orchestrator | 2025-09-11 00:32:40.711600 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-11 00:32:40.711613 | orchestrator | Thursday 11 September 2025 00:32:12 +0000 (0:00:00.833) 0:05:49.475 **** 2025-09-11 00:32:40.711625 | orchestrator | ok: [testbed-manager] 2025-09-11 00:32:40.711638 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:32:40.711650 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:32:40.711663 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:32:40.711675 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:32:40.711687 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:32:40.711697 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:32:40.711708 | orchestrator | 2025-09-11 00:32:40.711719 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-11 00:32:40.711729 | orchestrator | Thursday 11 September 2025 00:32:21 +0000 (0:00:09.032) 0:05:58.507 **** 2025-09-11 00:32:40.711750 | orchestrator | ok: [testbed-manager] 2025-09-11 00:32:40.711761 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:32:40.711771 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:32:40.711782 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:32:40.711792 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:32:40.711803 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:32:40.711813 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:32:40.711824 | orchestrator | 2025-09-11 00:32:40.711834 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-11 00:32:40.711845 | orchestrator | Thursday 11 September 2025 00:32:31 +0000 (0:00:10.606) 0:06:09.114 **** 2025-09-11 
00:32:40.711856 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-11 00:32:40.711867 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-11 00:32:40.711877 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-11 00:32:40.711888 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-11 00:32:40.711899 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-11 00:32:40.711909 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-11 00:32:40.711920 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-11 00:32:40.711930 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-11 00:32:40.711941 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-11 00:32:40.711951 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-11 00:32:40.711962 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-11 00:32:40.711972 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-11 00:32:40.711983 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-11 00:32:40.711994 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-11 00:32:40.712004 | orchestrator | 2025-09-11 00:32:40.712015 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-11 00:32:40.712043 | orchestrator | Thursday 11 September 2025 00:32:33 +0000 (0:00:01.145) 0:06:10.260 **** 2025-09-11 00:32:40.712055 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:32:40.712066 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:32:40.712076 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:32:40.712087 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:32:40.712097 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:32:40.712108 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:32:40.712118 | orchestrator 
| skipping: [testbed-node-2] 2025-09-11 00:32:40.712129 | orchestrator | 2025-09-11 00:32:40.712140 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-11 00:32:40.712150 | orchestrator | Thursday 11 September 2025 00:32:33 +0000 (0:00:00.476) 0:06:10.736 **** 2025-09-11 00:32:40.712161 | orchestrator | ok: [testbed-manager] 2025-09-11 00:32:40.712172 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:32:40.712182 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:32:40.712192 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:32:40.712203 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:32:40.712213 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:32:40.712224 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:32:40.712234 | orchestrator | 2025-09-11 00:32:40.712245 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-11 00:32:40.712257 | orchestrator | Thursday 11 September 2025 00:32:36 +0000 (0:00:03.297) 0:06:14.034 **** 2025-09-11 00:32:40.712267 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:32:40.712278 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:32:40.712288 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:32:40.712299 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:32:40.712309 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:32:40.712320 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:32:40.712330 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:32:40.712347 | orchestrator | 2025-09-11 00:32:40.712359 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-11 00:32:40.712370 | orchestrator | Thursday 11 September 2025 00:32:37 +0000 (0:00:00.456) 0:06:14.491 **** 2025-09-11 00:32:40.712381 | orchestrator | skipping: [testbed-manager] => 
(item=python3-docker)  2025-09-11 00:32:40.712392 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-11 00:32:40.712432 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:32:40.712443 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-11 00:32:40.712459 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-11 00:32:40.712470 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:32:40.712481 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-11 00:32:40.712491 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-11 00:32:40.712502 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:32:40.712512 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-11 00:32:40.712523 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-11 00:32:40.712534 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:32:40.712544 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-11 00:32:40.712555 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-11 00:32:40.712565 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:32:40.712576 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-11 00:32:40.712586 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-11 00:32:40.712597 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:32:40.712607 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-11 00:32:40.712618 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-11 00:32:40.712629 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:32:40.712639 | orchestrator | 2025-09-11 00:32:40.712650 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-11 00:32:40.712660 | 
orchestrator | Thursday 11 September 2025 00:32:37 +0000 (0:00:00.624) 0:06:15.115 **** 2025-09-11 00:32:40.712671 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:32:40.712681 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:32:40.712692 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:32:40.712702 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:32:40.712713 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:32:40.712723 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:32:40.712733 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:32:40.712744 | orchestrator | 2025-09-11 00:32:40.712755 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-11 00:32:40.712765 | orchestrator | Thursday 11 September 2025 00:32:38 +0000 (0:00:00.452) 0:06:15.567 **** 2025-09-11 00:32:40.712776 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:32:40.712786 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:32:40.712797 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:32:40.712807 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:32:40.712818 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:32:40.712828 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:32:40.712839 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:32:40.712849 | orchestrator | 2025-09-11 00:32:40.712860 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-11 00:32:40.712870 | orchestrator | Thursday 11 September 2025 00:32:38 +0000 (0:00:00.456) 0:06:16.024 **** 2025-09-11 00:32:40.712881 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:32:40.712891 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:32:40.712902 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:32:40.712912 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:32:40.712922 | orchestrator | 
skipping: [testbed-node-0] 2025-09-11 00:32:40.712939 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:32:40.712950 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:32:40.712960 | orchestrator | 2025-09-11 00:32:40.712971 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-11 00:32:40.712982 | orchestrator | Thursday 11 September 2025 00:32:39 +0000 (0:00:00.501) 0:06:16.525 **** 2025-09-11 00:32:40.712992 | orchestrator | ok: [testbed-manager] 2025-09-11 00:32:40.713010 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:33:00.369516 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:33:00.369627 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:33:00.369642 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:33:00.369653 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:33:00.369664 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:33:00.369675 | orchestrator | 2025-09-11 00:33:00.369688 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-11 00:33:00.369700 | orchestrator | Thursday 11 September 2025 00:32:40 +0000 (0:00:01.386) 0:06:17.912 **** 2025-09-11 00:33:00.369712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:33:00.369726 | orchestrator | 2025-09-11 00:33:00.369737 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-11 00:33:00.369748 | orchestrator | Thursday 11 September 2025 00:32:41 +0000 (0:00:00.987) 0:06:18.900 **** 2025-09-11 00:33:00.369758 | orchestrator | ok: [testbed-manager] 2025-09-11 00:33:00.369769 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:33:00.369780 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:33:00.369791 | orchestrator | 
changed: [testbed-node-5] 2025-09-11 00:33:00.369802 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:33:00.369812 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:33:00.369823 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:33:00.369834 | orchestrator | 2025-09-11 00:33:00.369844 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-11 00:33:00.369855 | orchestrator | Thursday 11 September 2025 00:32:42 +0000 (0:00:00.770) 0:06:19.670 **** 2025-09-11 00:33:00.369866 | orchestrator | ok: [testbed-manager] 2025-09-11 00:33:00.369876 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:33:00.369887 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:33:00.369897 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:33:00.369909 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:33:00.369920 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:33:00.369930 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:33:00.369941 | orchestrator | 2025-09-11 00:33:00.369952 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-11 00:33:00.369963 | orchestrator | Thursday 11 September 2025 00:32:43 +0000 (0:00:00.773) 0:06:20.444 **** 2025-09-11 00:33:00.369973 | orchestrator | ok: [testbed-manager] 2025-09-11 00:33:00.369984 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:33:00.369997 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:33:00.370093 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:33:00.370109 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:33:00.370122 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:33:00.370134 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:33:00.370147 | orchestrator | 2025-09-11 00:33:00.370160 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-11 00:33:00.370174 | 
orchestrator | Thursday 11 September 2025 00:32:44 +0000 (0:00:01.331) 0:06:21.776 **** 2025-09-11 00:33:00.370187 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:33:00.370199 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:33:00.370213 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:33:00.370225 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:33:00.370237 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:33:00.370251 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:33:00.370286 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:33:00.370300 | orchestrator | 2025-09-11 00:33:00.370312 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-11 00:33:00.370326 | orchestrator | Thursday 11 September 2025 00:32:45 +0000 (0:00:01.184) 0:06:22.961 **** 2025-09-11 00:33:00.370338 | orchestrator | ok: [testbed-manager] 2025-09-11 00:33:00.370351 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:33:00.370362 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:33:00.370372 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:33:00.370383 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:33:00.370394 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:33:00.370424 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:33:00.370435 | orchestrator | 2025-09-11 00:33:00.370446 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-11 00:33:00.370457 | orchestrator | Thursday 11 September 2025 00:32:46 +0000 (0:00:01.113) 0:06:24.075 **** 2025-09-11 00:33:00.370467 | orchestrator | changed: [testbed-manager] 2025-09-11 00:33:00.370478 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:33:00.370489 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:33:00.370499 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:33:00.370510 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:33:00.370521 | 
orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:00.370531 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:00.370542 | orchestrator |
2025-09-11 00:33:00.370553 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-11 00:33:00.370564 | orchestrator | Thursday 11 September 2025 00:32:48 +0000 (0:00:01.201) 0:06:25.276 ****
2025-09-11 00:33:00.370575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:33:00.370586 | orchestrator |
2025-09-11 00:33:00.370597 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-11 00:33:00.370608 | orchestrator | Thursday 11 September 2025 00:32:48 +0000 (0:00:00.890) 0:06:26.166 ****
2025-09-11 00:33:00.370618 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:00.370629 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:00.370640 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:00.370651 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:00.370662 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:00.370673 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:00.370684 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:00.370694 | orchestrator |
2025-09-11 00:33:00.370705 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-11 00:33:00.370716 | orchestrator | Thursday 11 September 2025 00:32:50 +0000 (0:00:01.251) 0:06:27.418 ****
2025-09-11 00:33:00.370727 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:00.370738 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:00.370767 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:00.370779 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:00.370789 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:00.370800 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:00.370810 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:00.370821 | orchestrator |
2025-09-11 00:33:00.370832 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-11 00:33:00.370843 | orchestrator | Thursday 11 September 2025 00:32:51 +0000 (0:00:01.010) 0:06:28.428 ****
2025-09-11 00:33:00.370853 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:00.370864 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:00.370875 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:00.370885 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:00.370896 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:00.370906 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:00.370917 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:00.370928 | orchestrator |
2025-09-11 00:33:00.370939 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-11 00:33:00.370959 | orchestrator | Thursday 11 September 2025 00:32:52 +0000 (0:00:01.065) 0:06:29.493 ****
2025-09-11 00:33:00.370970 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:00.370980 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:00.370991 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:00.371001 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:00.371012 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:00.371023 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:00.371034 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:00.371045 | orchestrator |
2025-09-11 00:33:00.371055 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-11 00:33:00.371066 | orchestrator | Thursday 11 September 2025 00:32:53 +0000 (0:00:01.021) 0:06:30.515 ****
2025-09-11 00:33:00.371078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:33:00.371089 | orchestrator |
2025-09-11 00:33:00.371100 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-11 00:33:00.371110 | orchestrator | Thursday 11 September 2025 00:32:54 +0000 (0:00:00.986) 0:06:31.502 ****
2025-09-11 00:33:00.371121 | orchestrator |
2025-09-11 00:33:00.371132 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-11 00:33:00.371143 | orchestrator | Thursday 11 September 2025 00:32:54 +0000 (0:00:00.045) 0:06:31.539 ****
2025-09-11 00:33:00.371153 | orchestrator |
2025-09-11 00:33:00.371165 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-11 00:33:00.371175 | orchestrator | Thursday 11 September 2025 00:32:54 +0000 (0:00:00.045) 0:06:31.585 ****
2025-09-11 00:33:00.371186 | orchestrator |
2025-09-11 00:33:00.371197 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-11 00:33:00.371208 | orchestrator | Thursday 11 September 2025 00:32:54 +0000 (0:00:00.037) 0:06:31.623 ****
2025-09-11 00:33:00.371218 | orchestrator |
2025-09-11 00:33:00.371229 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-11 00:33:00.371240 | orchestrator | Thursday 11 September 2025 00:32:54 +0000 (0:00:00.037) 0:06:31.661 ****
2025-09-11 00:33:00.371251 | orchestrator |
2025-09-11 00:33:00.371261 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-11 00:33:00.371272 | orchestrator | Thursday 11 September 2025 00:32:54 +0000 (0:00:00.045) 0:06:31.706 ****
2025-09-11 00:33:00.371283 | orchestrator |
2025-09-11 00:33:00.371294 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-11 00:33:00.371305 | orchestrator | Thursday 11 September 2025 00:32:54 +0000 (0:00:00.037) 0:06:31.744 ****
2025-09-11 00:33:00.371315 | orchestrator |
2025-09-11 00:33:00.371326 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-11 00:33:00.371337 | orchestrator | Thursday 11 September 2025 00:32:54 +0000 (0:00:00.039) 0:06:31.784 ****
2025-09-11 00:33:00.371347 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:00.371358 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:00.371369 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:00.371380 | orchestrator |
2025-09-11 00:33:00.371391 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-11 00:33:00.371416 | orchestrator | Thursday 11 September 2025 00:32:55 +0000 (0:00:00.952) 0:06:32.736 ****
2025-09-11 00:33:00.371427 | orchestrator | changed: [testbed-manager]
2025-09-11 00:33:00.371437 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:33:00.371448 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:33:00.371459 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:33:00.371469 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:33:00.371480 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:00.371491 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:00.371501 | orchestrator |
2025-09-11 00:33:00.371512 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-11 00:33:00.371540 | orchestrator | Thursday 11 September 2025 00:32:56 +0000 (0:00:01.299) 0:06:34.036 ****
2025-09-11 00:33:00.371552 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:33:00.371563 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:33:00.371573 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:33:00.371584 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:33:00.371595 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:33:00.371606 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:00.371616 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:00.371627 | orchestrator |
2025-09-11 00:33:00.371638 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-09-11 00:33:00.371649 | orchestrator | Thursday 11 September 2025 00:32:59 +0000 (0:00:02.511) 0:06:36.547 ****
2025-09-11 00:33:00.371659 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:33:00.371670 | orchestrator |
2025-09-11 00:33:00.371681 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-09-11 00:33:00.371691 | orchestrator | Thursday 11 September 2025 00:32:59 +0000 (0:00:00.094) 0:06:36.641 ****
2025-09-11 00:33:00.371702 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:00.371713 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:33:00.371723 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:33:00.371734 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:33:00.371751 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:33:23.536907 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:23.537022 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:23.537039 | orchestrator |
2025-09-11 00:33:23.537053 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-09-11 00:33:23.537065 | orchestrator | Thursday 11 September 2025 00:33:00 +0000 (0:00:00.929) 0:06:37.571 ****
2025-09-11 00:33:23.537077 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:33:23.537088 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:33:23.537099 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:33:23.537110 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:33:23.537120 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:33:23.537131 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:33:23.537141 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:33:23.537152 | orchestrator |
2025-09-11 00:33:23.537163 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-09-11 00:33:23.537174 | orchestrator | Thursday 11 September 2025 00:33:00 +0000 (0:00:00.496) 0:06:38.067 ****
2025-09-11 00:33:23.537186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:33:23.537199 | orchestrator |
2025-09-11 00:33:23.537210 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-09-11 00:33:23.537222 | orchestrator | Thursday 11 September 2025 00:33:01 +0000 (0:00:00.965) 0:06:39.033 ****
2025-09-11 00:33:23.537233 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:23.537245 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:23.537255 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:23.537266 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:23.537277 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:23.537287 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:23.537298 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:23.537308 | orchestrator |
2025-09-11 00:33:23.537319 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-09-11 00:33:23.537330 | orchestrator | Thursday 11 September 2025 00:33:02 +0000 (0:00:00.752) 0:06:39.786 ****
2025-09-11 00:33:23.537341 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-09-11 00:33:23.537352 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-09-11 00:33:23.537363 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-09-11 00:33:23.537467 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-09-11 00:33:23.537483 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-09-11 00:33:23.537495 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-09-11 00:33:23.537508 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-09-11 00:33:23.537520 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-09-11 00:33:23.537532 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-09-11 00:33:23.537544 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-09-11 00:33:23.537556 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-09-11 00:33:23.537569 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-09-11 00:33:23.537581 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-09-11 00:33:23.537593 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-09-11 00:33:23.537605 | orchestrator |
2025-09-11 00:33:23.537617 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-09-11 00:33:23.537631 | orchestrator | Thursday 11 September 2025 00:33:04 +0000 (0:00:02.162) 0:06:41.949 ****
2025-09-11 00:33:23.537643 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:33:23.537655 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:33:23.537667 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:33:23.537679 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:33:23.537691 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:33:23.537704 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:33:23.537716 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:33:23.537727 | orchestrator |
2025-09-11 00:33:23.537739 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-09-11 00:33:23.537752 | orchestrator | Thursday 11 September 2025 00:33:05 +0000 (0:00:00.461) 0:06:42.410 ****
2025-09-11 00:33:23.537766 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:33:23.537780 | orchestrator |
2025-09-11 00:33:23.537793 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-09-11 00:33:23.537805 | orchestrator | Thursday 11 September 2025 00:33:06 +0000 (0:00:00.883) 0:06:43.293 ****
2025-09-11 00:33:23.537815 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:23.537826 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:23.537836 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:23.537846 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:23.537857 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:23.537867 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:23.537878 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:23.537889 | orchestrator |
2025-09-11 00:33:23.537899 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-09-11 00:33:23.537910 | orchestrator | Thursday 11 September 2025 00:33:06 +0000 (0:00:00.756) 0:06:44.050 ****
2025-09-11 00:33:23.537921 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:23.537931 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:23.537942 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:23.537952 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:23.537962 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:23.537973 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:23.537983 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:23.537994 | orchestrator |
2025-09-11 00:33:23.538004 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-09-11 00:33:23.538092 | orchestrator | Thursday 11 September 2025 00:33:07 +0000 (0:00:00.741) 0:06:44.792 ****
2025-09-11 00:33:23.538108 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:33:23.538119 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:33:23.538129 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:33:23.538140 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:33:23.538167 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:33:23.538178 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:33:23.538189 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:33:23.538200 | orchestrator |
2025-09-11 00:33:23.538211 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-09-11 00:33:23.538221 | orchestrator | Thursday 11 September 2025 00:33:08 +0000 (0:00:00.447) 0:06:45.239 ****
2025-09-11 00:33:23.538232 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:23.538243 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:23.538253 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:23.538264 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:23.538274 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:23.538285 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:23.538296 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:23.538306 | orchestrator |
2025-09-11 00:33:23.538317 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-09-11 00:33:23.538328 | orchestrator | Thursday 11 September 2025 00:33:09 +0000 (0:00:01.582) 0:06:46.821 ****
2025-09-11 00:33:23.538338 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:33:23.538349 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:33:23.538360 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:33:23.538371 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:33:23.538381 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:33:23.538392 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:33:23.538420 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:33:23.538432 | orchestrator |
2025-09-11 00:33:23.538442 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-09-11 00:33:23.538453 | orchestrator | Thursday 11 September 2025 00:33:10 +0000 (0:00:00.469) 0:06:47.291 ****
2025-09-11 00:33:23.538464 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:23.538474 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:33:23.538485 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:33:23.538495 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:33:23.538506 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:33:23.538517 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:23.538527 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:23.538538 | orchestrator |
2025-09-11 00:33:23.538549 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-09-11 00:33:23.538565 | orchestrator | Thursday 11 September 2025 00:33:16 +0000 (0:00:06.475) 0:06:53.767 ****
2025-09-11 00:33:23.538576 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:23.538587 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:33:23.538597 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:33:23.538608 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:33:23.538618 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:33:23.538629 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:23.538639 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:23.538650 | orchestrator |
2025-09-11 00:33:23.538661 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-09-11 00:33:23.538672 | orchestrator | Thursday 11 September 2025 00:33:17 +0000 (0:00:01.279) 0:06:55.046 ****
2025-09-11 00:33:23.538682 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:23.538692 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:33:23.538703 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:33:23.538714 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:33:23.538724 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:33:23.538735 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:23.538745 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:23.538756 | orchestrator |
2025-09-11 00:33:23.538767 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-09-11 00:33:23.538777 | orchestrator | Thursday 11 September 2025 00:33:19 +0000 (0:00:01.716) 0:06:56.762 ****
2025-09-11 00:33:23.538788 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:23.538807 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:33:23.538818 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:33:23.538828 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:33:23.538839 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:33:23.538849 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:23.538860 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:23.538870 | orchestrator |
2025-09-11 00:33:23.538881 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-11 00:33:23.538891 | orchestrator | Thursday 11 September 2025 00:33:21 +0000 (0:00:01.830) 0:06:58.592 ****
2025-09-11 00:33:23.538902 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:23.538913 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:23.538923 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:23.538934 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:23.538945 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:23.538955 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:23.538966 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:23.538976 | orchestrator |
2025-09-11 00:33:23.538987 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-11 00:33:23.538998 | orchestrator | Thursday 11 September 2025 00:33:22 +0000 (0:00:00.820) 0:06:59.413 ****
2025-09-11 00:33:23.539008 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:33:23.539019 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:33:23.539029 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:33:23.539040 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:33:23.539050 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:33:23.539061 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:33:23.539071 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:33:23.539082 | orchestrator |
2025-09-11 00:33:23.539093 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-09-11 00:33:23.539103 | orchestrator | Thursday 11 September 2025 00:33:23 +0000 (0:00:00.868) 0:07:00.282 ****
2025-09-11 00:33:23.539114 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:33:23.539124 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:33:23.539135 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:33:23.539145 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:33:23.539156 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:33:23.539166 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:33:23.539177 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:33:23.539188 | orchestrator |
2025-09-11 00:33:23.539205 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-09-11 00:33:54.552671 | orchestrator | Thursday 11 September 2025 00:33:23 +0000 (0:00:00.453) 0:07:00.735 ****
2025-09-11 00:33:54.552786 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:54.552803 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:54.552814 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:54.552825 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:54.552836 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:54.552847 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:54.552858 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:54.552870 | orchestrator |
2025-09-11 00:33:54.552881 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-09-11 00:33:54.552893 | orchestrator | Thursday 11 September 2025 00:33:23 +0000 (0:00:00.456) 0:07:01.192 ****
2025-09-11 00:33:54.552904 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:54.552915 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:54.552925 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:54.552936 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:54.552947 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:54.552957 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:54.552968 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:54.552979 | orchestrator |
2025-09-11 00:33:54.552990 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-09-11 00:33:54.553001 | orchestrator | Thursday 11 September 2025 00:33:24 +0000 (0:00:00.479) 0:07:01.671 ****
2025-09-11 00:33:54.553039 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:54.553051 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:54.553061 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:54.553072 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:54.553082 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:54.553093 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:54.553104 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:54.553114 | orchestrator |
2025-09-11 00:33:54.553125 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-09-11 00:33:54.553136 | orchestrator | Thursday 11 September 2025 00:33:24 +0000 (0:00:00.476) 0:07:02.148 ****
2025-09-11 00:33:54.553147 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:54.553157 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:54.553168 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:54.553178 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:54.553189 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:54.553200 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:54.553212 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:54.553224 | orchestrator |
2025-09-11 00:33:54.553237 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-09-11 00:33:54.553249 | orchestrator | Thursday 11 September 2025 00:33:30 +0000 (0:00:05.745) 0:07:07.894 ****
2025-09-11 00:33:54.553277 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:33:54.553291 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:33:54.553303 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:33:54.553315 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:33:54.553328 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:33:54.553340 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:33:54.553352 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:33:54.553365 | orchestrator |
2025-09-11 00:33:54.553382 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-09-11 00:33:54.553425 | orchestrator | Thursday 11 September 2025 00:33:31 +0000 (0:00:00.478) 0:07:08.372 ****
2025-09-11 00:33:54.553441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:33:54.553463 | orchestrator |
2025-09-11 00:33:54.553477 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-09-11 00:33:54.553489 | orchestrator | Thursday 11 September 2025 00:33:31 +0000 (0:00:00.740) 0:07:09.113 ****
2025-09-11 00:33:54.553501 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:54.553513 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:54.553525 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:54.553538 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:54.553551 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:54.553563 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:54.553574 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:54.553584 | orchestrator |
2025-09-11 00:33:54.553595 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-09-11 00:33:54.553606 | orchestrator | Thursday 11 September 2025 00:33:33 +0000 (0:00:01.916) 0:07:11.030 ****
2025-09-11 00:33:54.553616 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:54.553627 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:54.553637 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:54.553648 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:54.553658 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:54.553669 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:54.553679 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:54.553690 | orchestrator |
2025-09-11 00:33:54.553701 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-09-11 00:33:54.553712 | orchestrator | Thursday 11 September 2025 00:33:34 +0000 (0:00:01.122) 0:07:12.152 ****
2025-09-11 00:33:54.553722 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:54.553733 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:54.553752 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:54.553762 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:54.553773 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:54.553783 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:54.553794 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:54.553804 | orchestrator |
2025-09-11 00:33:54.553815 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-09-11 00:33:54.553826 | orchestrator | Thursday 11 September 2025 00:33:35 +0000 (0:00:00.797) 0:07:12.950 ****
2025-09-11 00:33:54.553837 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-11 00:33:54.553849 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-11 00:33:54.553860 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-11 00:33:54.553889 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-11 00:33:54.553900 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-11 00:33:54.553911 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-11 00:33:54.553922 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-11 00:33:54.553933 | orchestrator |
2025-09-11 00:33:54.553943 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-09-11 00:33:54.553954 | orchestrator | Thursday 11 September 2025 00:33:37 +0000 (0:00:01.574) 0:07:14.524 ****
2025-09-11 00:33:54.553965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:33:54.553976 | orchestrator |
2025-09-11 00:33:54.553987 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-09-11 00:33:54.553998 | orchestrator | Thursday 11 September 2025 00:33:38 +0000 (0:00:00.902) 0:07:15.427 ****
2025-09-11 00:33:54.554009 | orchestrator | changed: [testbed-manager]
2025-09-11 00:33:54.554075 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:33:54.554087 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:54.554098 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:54.554109 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:33:54.554119 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:33:54.554130 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:33:54.554141 | orchestrator |
2025-09-11 00:33:54.554152 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-09-11 00:33:54.554162 | orchestrator | Thursday 11 September 2025 00:33:47 +0000 (0:00:09.049) 0:07:24.476 ****
2025-09-11 00:33:54.554173 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:54.554189 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:54.554201 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:54.554211 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:54.554222 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:54.554233 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:54.554243 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:54.554253 | orchestrator |
2025-09-11 00:33:54.554264 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-11 00:33:54.554275 | orchestrator | Thursday 11 September 2025 00:33:49 +0000 (0:00:01.754) 0:07:26.231 ****
2025-09-11 00:33:54.554286 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:54.554296 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:54.554314 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:54.554325 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:54.554335 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:54.554346 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:54.554356 | orchestrator |
2025-09-11 00:33:54.554367 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-11 00:33:54.554378 | orchestrator | Thursday 11 September 2025 00:33:50 +0000 (0:00:01.153) 0:07:27.384 ****
2025-09-11 00:33:54.554388 | orchestrator | changed: [testbed-manager]
2025-09-11 00:33:54.554430 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:33:54.554441 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:33:54.554452 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:33:54.554463 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:54.554473 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:33:54.554484 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:54.554494 | orchestrator |
2025-09-11 00:33:54.554505 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-11 00:33:54.554516 | orchestrator |
2025-09-11 00:33:54.554526 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-11 00:33:54.554537 | orchestrator | Thursday 11 September 2025 00:33:51 +0000 (0:00:01.175) 0:07:28.559 ****
2025-09-11 00:33:54.554548 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:33:54.554559 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:33:54.554569 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:33:54.554580 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:33:54.554591 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:33:54.554601 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:33:54.554612 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:33:54.554622 | orchestrator |
2025-09-11 00:33:54.554633 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-11 00:33:54.554644 | orchestrator |
2025-09-11 00:33:54.554654 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-11 00:33:54.554665 | orchestrator | Thursday 11 September 2025 00:33:51 +0000 (0:00:00.473) 0:07:29.032 ****
2025-09-11 00:33:54.554675 | orchestrator | changed: [testbed-manager]
2025-09-11 00:33:54.554686 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:33:54.554696 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:33:54.554707 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:33:54.554718 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:33:54.554728 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:33:54.554739 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:33:54.554749 | orchestrator |
2025-09-11 00:33:54.554760 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-11 00:33:54.554771 | orchestrator | Thursday 11 September 2025 00:33:53 +0000 (0:00:01.190) 0:07:30.223 ****
2025-09-11 00:33:54.554781 | orchestrator | ok: [testbed-manager]
2025-09-11 00:33:54.554792 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:33:54.554803 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:33:54.554813 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:33:54.554824 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:33:54.554834 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:33:54.554845 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:33:54.554855 | orchestrator |
2025-09-11 00:33:54.554866 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-11 00:33:54.554885 | orchestrator | Thursday 11 September 2025 00:33:54 +0000 (0:00:01.529) 0:07:31.753 ****
2025-09-11 00:34:17.385908 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:34:17.386093 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:34:17.386115 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:34:17.386128 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:34:17.386139 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:34:17.386150 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:34:17.386161 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:34:17.386173 | orchestrator |
2025-09-11 00:34:17.386205 | orchestrator | TASK [Include smartd role] *****************************************************
2025-09-11 00:34:17.386218 | orchestrator | Thursday 11 September 2025 00:33:54 +0000 (0:00:00.443) 0:07:32.196 ****
2025-09-11 00:34:17.386229 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:34:17.386241 | orchestrator |
2025-09-11 00:34:17.386252 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-11 00:34:17.386262 | orchestrator | Thursday 11 September 2025 00:33:55 +0000 (0:00:00.889) 0:07:33.085 ****
2025-09-11 00:34:17.386275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:34:17.386288 | orchestrator |
2025-09-11 00:34:17.386299 | orchestrator |
TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-11 00:34:17.386310 | orchestrator | Thursday 11 September 2025 00:33:56 +0000 (0:00:00.748) 0:07:33.834 **** 2025-09-11 00:34:17.386321 | orchestrator | changed: [testbed-manager] 2025-09-11 00:34:17.386331 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:34:17.386342 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:34:17.386352 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:34:17.386389 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:34:17.386400 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:34:17.386410 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:34:17.386421 | orchestrator | 2025-09-11 00:34:17.386432 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-11 00:34:17.386443 | orchestrator | Thursday 11 September 2025 00:34:05 +0000 (0:00:08.514) 0:07:42.349 **** 2025-09-11 00:34:17.386453 | orchestrator | changed: [testbed-manager] 2025-09-11 00:34:17.386464 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:34:17.386475 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:34:17.386485 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:34:17.386496 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:34:17.386506 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:34:17.386517 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:34:17.386528 | orchestrator | 2025-09-11 00:34:17.386538 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-11 00:34:17.386549 | orchestrator | Thursday 11 September 2025 00:34:05 +0000 (0:00:00.819) 0:07:43.169 **** 2025-09-11 00:34:17.386560 | orchestrator | changed: [testbed-manager] 2025-09-11 00:34:17.386570 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:34:17.386581 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:34:17.386592 | 
orchestrator | changed: [testbed-node-5] 2025-09-11 00:34:17.386602 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:34:17.386613 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:34:17.386623 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:34:17.386634 | orchestrator | 2025-09-11 00:34:17.386644 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-11 00:34:17.386655 | orchestrator | Thursday 11 September 2025 00:34:07 +0000 (0:00:01.441) 0:07:44.610 **** 2025-09-11 00:34:17.386666 | orchestrator | changed: [testbed-manager] 2025-09-11 00:34:17.386676 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:34:17.386687 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:34:17.386698 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:34:17.386708 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:34:17.386719 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:34:17.386729 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:34:17.386740 | orchestrator | 2025-09-11 00:34:17.386751 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-11 00:34:17.386762 | orchestrator | Thursday 11 September 2025 00:34:09 +0000 (0:00:01.679) 0:07:46.289 **** 2025-09-11 00:34:17.386772 | orchestrator | changed: [testbed-manager] 2025-09-11 00:34:17.386790 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:34:17.386801 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:34:17.386812 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:34:17.386822 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:34:17.386833 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:34:17.386843 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:34:17.386854 | orchestrator | 2025-09-11 00:34:17.386864 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-11 
00:34:17.386875 | orchestrator | Thursday 11 September 2025 00:34:10 +0000 (0:00:01.274) 0:07:47.563 **** 2025-09-11 00:34:17.386885 | orchestrator | changed: [testbed-manager] 2025-09-11 00:34:17.386896 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:34:17.386906 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:34:17.386917 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:34:17.386927 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:34:17.386937 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:34:17.386948 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:34:17.386958 | orchestrator | 2025-09-11 00:34:17.386969 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-11 00:34:17.386980 | orchestrator | 2025-09-11 00:34:17.386990 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-11 00:34:17.387001 | orchestrator | Thursday 11 September 2025 00:34:11 +0000 (0:00:01.343) 0:07:48.906 **** 2025-09-11 00:34:17.387011 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:34:17.387056 | orchestrator | 2025-09-11 00:34:17.387068 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-11 00:34:17.387096 | orchestrator | Thursday 11 September 2025 00:34:12 +0000 (0:00:00.754) 0:07:49.661 **** 2025-09-11 00:34:17.387108 | orchestrator | ok: [testbed-manager] 2025-09-11 00:34:17.387120 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:34:17.387130 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:34:17.387141 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:34:17.387152 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:34:17.387163 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:34:17.387173 | orchestrator | ok: [testbed-node-2] 2025-09-11 
00:34:17.387184 | orchestrator | 2025-09-11 00:34:17.387195 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-11 00:34:17.387206 | orchestrator | Thursday 11 September 2025 00:34:13 +0000 (0:00:00.877) 0:07:50.539 **** 2025-09-11 00:34:17.387217 | orchestrator | changed: [testbed-manager] 2025-09-11 00:34:17.387227 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:34:17.387238 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:34:17.387248 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:34:17.387259 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:34:17.387269 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:34:17.387280 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:34:17.387291 | orchestrator | 2025-09-11 00:34:17.387301 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-11 00:34:17.387312 | orchestrator | Thursday 11 September 2025 00:34:14 +0000 (0:00:01.251) 0:07:51.791 **** 2025-09-11 00:34:17.387323 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:34:17.387334 | orchestrator | 2025-09-11 00:34:17.387345 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-11 00:34:17.387355 | orchestrator | Thursday 11 September 2025 00:34:15 +0000 (0:00:00.799) 0:07:52.590 **** 2025-09-11 00:34:17.387382 | orchestrator | ok: [testbed-manager] 2025-09-11 00:34:17.387393 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:34:17.387404 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:34:17.387414 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:34:17.387425 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:34:17.387442 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:34:17.387453 | orchestrator | ok: [testbed-node-2] 2025-09-11 
00:34:17.387464 | orchestrator | 2025-09-11 00:34:17.387475 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-11 00:34:17.387486 | orchestrator | Thursday 11 September 2025 00:34:16 +0000 (0:00:00.777) 0:07:53.368 **** 2025-09-11 00:34:17.387497 | orchestrator | changed: [testbed-manager] 2025-09-11 00:34:17.387512 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:34:17.387523 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:34:17.387533 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:34:17.387544 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:34:17.387555 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:34:17.387565 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:34:17.387576 | orchestrator | 2025-09-11 00:34:17.387587 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:34:17.387599 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-11 00:34:17.387610 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-11 00:34:17.387621 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-11 00:34:17.387632 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-11 00:34:17.387643 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-11 00:34:17.387654 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-11 00:34:17.387664 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-11 00:34:17.387675 | orchestrator | 2025-09-11 00:34:17.387686 | orchestrator | 2025-09-11 
00:34:17.387697 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:34:17.387708 | orchestrator | Thursday 11 September 2025 00:34:17 +0000 (0:00:01.204) 0:07:54.572 **** 2025-09-11 00:34:17.387719 | orchestrator | =============================================================================== 2025-09-11 00:34:17.387729 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.50s 2025-09-11 00:34:17.387740 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.65s 2025-09-11 00:34:17.387751 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.67s 2025-09-11 00:34:17.387761 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.92s 2025-09-11 00:34:17.387772 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.81s 2025-09-11 00:34:17.387783 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.61s 2025-09-11 00:34:17.387793 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required --- 9.79s 2025-09-11 00:34:17.387804 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.50s 2025-09-11 00:34:17.387815 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.05s 2025-09-11 00:34:17.387826 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.03s 2025-09-11 00:34:17.387843 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.51s 2025-09-11 00:34:17.712766 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.47s 2025-09-11 00:34:17.712879 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.18s 2025-09-11 00:34:17.712930 | 
orchestrator | osism.services.docker : Add repository ---------------------------------- 8.06s 2025-09-11 00:34:17.712948 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.95s 2025-09-11 00:34:17.712963 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.48s 2025-09-11 00:34:17.712978 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 6.48s 2025-09-11 00:34:17.712994 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------- 6.44s 2025-09-11 00:34:17.713011 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.85s 2025-09-11 00:34:17.713027 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.75s 2025-09-11 00:34:17.966690 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-11 00:34:17.966789 | orchestrator | + osism apply network 2025-09-11 00:34:30.369077 | orchestrator | 2025-09-11 00:34:30 | INFO  | Task 35ae8847-471d-45c7-bb89-bc523c84fac2 (network) was prepared for execution. 2025-09-11 00:34:30.369186 | orchestrator | 2025-09-11 00:34:30 | INFO  | It takes a moment until task 35ae8847-471d-45c7-bb89-bc523c84fac2 (network) has been started and output is visible here. 
2025-09-11 00:34:57.529281 | orchestrator | 2025-09-11 00:34:57.529475 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-11 00:34:57.529503 | orchestrator | 2025-09-11 00:34:57.529524 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-11 00:34:57.529543 | orchestrator | Thursday 11 September 2025 00:34:34 +0000 (0:00:00.275) 0:00:00.275 **** 2025-09-11 00:34:57.529564 | orchestrator | ok: [testbed-manager] 2025-09-11 00:34:57.529586 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:34:57.529605 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:34:57.529627 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:34:57.529640 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:34:57.529651 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:34:57.529662 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:34:57.529673 | orchestrator | 2025-09-11 00:34:57.529684 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-11 00:34:57.529695 | orchestrator | Thursday 11 September 2025 00:34:35 +0000 (0:00:00.647) 0:00:00.922 **** 2025-09-11 00:34:57.529709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:34:57.529723 | orchestrator | 2025-09-11 00:34:57.529734 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-11 00:34:57.529745 | orchestrator | Thursday 11 September 2025 00:34:36 +0000 (0:00:01.118) 0:00:02.041 **** 2025-09-11 00:34:57.529756 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:34:57.529766 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:34:57.529777 | orchestrator | ok: [testbed-manager] 2025-09-11 00:34:57.529788 | 
orchestrator | ok: [testbed-node-2] 2025-09-11 00:34:57.529799 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:34:57.529812 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:34:57.529824 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:34:57.529837 | orchestrator | 2025-09-11 00:34:57.529851 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-11 00:34:57.529864 | orchestrator | Thursday 11 September 2025 00:34:37 +0000 (0:00:01.572) 0:00:03.613 **** 2025-09-11 00:34:57.529876 | orchestrator | ok: [testbed-manager] 2025-09-11 00:34:57.529889 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:34:57.529901 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:34:57.529914 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:34:57.529927 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:34:57.529940 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:34:57.529952 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:34:57.529964 | orchestrator | 2025-09-11 00:34:57.529976 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-11 00:34:57.530083 | orchestrator | Thursday 11 September 2025 00:34:39 +0000 (0:00:01.508) 0:00:05.121 **** 2025-09-11 00:34:57.530098 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-11 00:34:57.530111 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-11 00:34:57.530124 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-11 00:34:57.530137 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-11 00:34:57.530150 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-11 00:34:57.530162 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-11 00:34:57.530173 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-11 00:34:57.530184 | orchestrator | 2025-09-11 00:34:57.530195 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2025-09-11 00:34:57.530205 | orchestrator | Thursday 11 September 2025 00:34:40 +0000 (0:00:00.888) 0:00:06.009 **** 2025-09-11 00:34:57.530216 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-11 00:34:57.530228 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-11 00:34:57.530238 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-11 00:34:57.530249 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-11 00:34:57.530260 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-11 00:34:57.530270 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-11 00:34:57.530281 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-11 00:34:57.530292 | orchestrator | 2025-09-11 00:34:57.530302 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-11 00:34:57.530338 | orchestrator | Thursday 11 September 2025 00:34:43 +0000 (0:00:03.379) 0:00:09.389 **** 2025-09-11 00:34:57.530349 | orchestrator | changed: [testbed-manager] 2025-09-11 00:34:57.530360 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:34:57.530371 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:34:57.530382 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:34:57.530392 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:34:57.530403 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:34:57.530414 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:34:57.530424 | orchestrator | 2025-09-11 00:34:57.530435 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-11 00:34:57.530446 | orchestrator | Thursday 11 September 2025 00:34:45 +0000 (0:00:01.431) 0:00:10.821 **** 2025-09-11 00:34:57.530457 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-11 00:34:57.530467 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-11 00:34:57.530478 | orchestrator | ok: [testbed-node-2 
-> localhost] 2025-09-11 00:34:57.530489 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-11 00:34:57.530499 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-11 00:34:57.530510 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-11 00:34:57.530521 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-11 00:34:57.530531 | orchestrator | 2025-09-11 00:34:57.530542 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-11 00:34:57.530553 | orchestrator | Thursday 11 September 2025 00:34:46 +0000 (0:00:01.807) 0:00:12.628 **** 2025-09-11 00:34:57.530563 | orchestrator | ok: [testbed-manager] 2025-09-11 00:34:57.530574 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:34:57.530585 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:34:57.530596 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:34:57.530606 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:34:57.530617 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:34:57.530628 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:34:57.530638 | orchestrator | 2025-09-11 00:34:57.530649 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-11 00:34:57.530679 | orchestrator | Thursday 11 September 2025 00:34:47 +0000 (0:00:01.044) 0:00:13.673 **** 2025-09-11 00:34:57.530691 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:34:57.530701 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:34:57.530712 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:34:57.530733 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:34:57.530744 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:34:57.530754 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:34:57.530765 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:34:57.530776 | orchestrator | 2025-09-11 00:34:57.530787 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2025-09-11 00:34:57.530812 | orchestrator | Thursday 11 September 2025 00:34:48 +0000 (0:00:00.607) 0:00:14.280 **** 2025-09-11 00:34:57.530823 | orchestrator | ok: [testbed-manager] 2025-09-11 00:34:57.530834 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:34:57.530845 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:34:57.530855 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:34:57.530866 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:34:57.530877 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:34:57.530887 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:34:57.530898 | orchestrator | 2025-09-11 00:34:57.530909 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-11 00:34:57.530919 | orchestrator | Thursday 11 September 2025 00:34:50 +0000 (0:00:02.200) 0:00:16.481 **** 2025-09-11 00:34:57.530930 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:34:57.530941 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:34:57.530951 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:34:57.530962 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:34:57.530972 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:34:57.530983 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:34:57.530995 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-11 00:34:57.531007 | orchestrator | 2025-09-11 00:34:57.531018 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-11 00:34:57.531028 | orchestrator | Thursday 11 September 2025 00:34:51 +0000 (0:00:00.852) 0:00:17.333 **** 2025-09-11 00:34:57.531039 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:34:57.531050 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:34:57.531060 | orchestrator | changed: [testbed-node-0] 2025-09-11 
00:34:57.531071 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:34:57.531081 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:34:57.531092 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:34:57.531103 | orchestrator | ok: [testbed-manager] 2025-09-11 00:34:57.531113 | orchestrator | 2025-09-11 00:34:57.531124 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-11 00:34:57.531135 | orchestrator | Thursday 11 September 2025 00:34:53 +0000 (0:00:01.984) 0:00:19.318 **** 2025-09-11 00:34:57.531146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:34:57.531159 | orchestrator | 2025-09-11 00:34:57.531170 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-11 00:34:57.531180 | orchestrator | Thursday 11 September 2025 00:34:54 +0000 (0:00:01.201) 0:00:20.519 **** 2025-09-11 00:34:57.531191 | orchestrator | ok: [testbed-manager] 2025-09-11 00:34:57.531202 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:34:57.531212 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:34:57.531223 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:34:57.531234 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:34:57.531244 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:34:57.531255 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:34:57.531266 | orchestrator | 2025-09-11 00:34:57.531277 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-11 00:34:57.531287 | orchestrator | Thursday 11 September 2025 00:34:55 +0000 (0:00:00.915) 0:00:21.435 **** 2025-09-11 00:34:57.531298 | orchestrator | ok: [testbed-manager] 2025-09-11 00:34:57.531326 | orchestrator | ok: [testbed-node-0] 2025-09-11 
00:34:57.531338 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:34:57.531358 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:34:57.531369 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:34:57.531380 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:34:57.531391 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:34:57.531401 | orchestrator | 2025-09-11 00:34:57.531412 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-11 00:34:57.531423 | orchestrator | Thursday 11 September 2025 00:34:56 +0000 (0:00:00.730) 0:00:22.165 **** 2025-09-11 00:34:57.531434 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-11 00:34:57.531444 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-11 00:34:57.531455 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-11 00:34:57.531466 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-11 00:34:57.531476 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-11 00:34:57.531487 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-11 00:34:57.531498 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-11 00:34:57.531508 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-11 00:34:57.531519 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-11 00:34:57.531530 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-11 00:34:57.531541 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-11 00:34:57.531551 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-11 00:34:57.531562 | orchestrator | changed: [testbed-node-5] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-09-11 00:34:57.531573 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-11 00:34:57.531584 | orchestrator | 2025-09-11 00:34:57.531602 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-11 00:35:12.144948 | orchestrator | Thursday 11 September 2025 00:34:57 +0000 (0:00:01.159) 0:00:23.325 **** 2025-09-11 00:35:12.145037 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:35:12.145053 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:35:12.145064 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:35:12.145076 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:35:12.145086 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:35:12.145097 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:35:12.145109 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:35:12.145120 | orchestrator | 2025-09-11 00:35:12.145145 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-11 00:35:12.145156 | orchestrator | Thursday 11 September 2025 00:34:58 +0000 (0:00:00.585) 0:00:23.911 **** 2025-09-11 00:35:12.145168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-0, testbed-node-4, testbed-node-5 2025-09-11 00:35:12.145181 | orchestrator | 2025-09-11 00:35:12.145192 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-11 00:35:12.145203 | orchestrator | Thursday 11 September 2025 00:35:02 +0000 (0:00:04.222) 0:00:28.133 **** 2025-09-11 00:35:12.145215 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145233 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145266 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145459 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': 
'192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145471 | orchestrator | 2025-09-11 00:35:12.145482 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-11 00:35:12.145493 | orchestrator | Thursday 11 September 2025 00:35:07 +0000 (0:00:04.940) 0:00:33.074 **** 2025-09-11 00:35:12.145504 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145592 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-11 00:35:12.145617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145656 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:12.145679 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:17.111657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-11 00:35:17.111759 | orchestrator | 2025-09-11 00:35:17.111776 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-11 00:35:17.111788 | orchestrator | Thursday 11 September 2025 00:35:12 +0000 (0:00:04.869) 0:00:37.943 **** 2025-09-11 00:35:17.111823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:35:17.111835 | orchestrator | 2025-09-11 00:35:17.111847 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-11 00:35:17.111857 | orchestrator | Thursday 11 September 2025 00:35:13 +0000 (0:00:01.096) 0:00:39.040 **** 2025-09-11 00:35:17.111868 | orchestrator | ok: [testbed-manager] 2025-09-11 00:35:17.111880 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:35:17.111891 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:35:17.111901 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:35:17.111912 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:35:17.111922 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:35:17.111933 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:35:17.111944 | orchestrator | 2025-09-11 00:35:17.111955 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-09-11 00:35:17.111965 | orchestrator | Thursday 11 September 2025 00:35:14 +0000 (0:00:00.974) 0:00:40.014 **** 2025-09-11 00:35:17.111976 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-11 00:35:17.111988 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-11 00:35:17.111998 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-11 00:35:17.112009 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-11 00:35:17.112019 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-11 00:35:17.112030 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-11 00:35:17.112040 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-11 00:35:17.112065 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-11 00:35:17.112077 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:35:17.112088 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-11 00:35:17.112099 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-11 00:35:17.112109 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-11 00:35:17.112120 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-11 00:35:17.112130 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:35:17.112141 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-11 00:35:17.112152 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2025-09-11 00:35:17.112162 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-11 00:35:17.112173 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-11 00:35:17.112183 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:35:17.112194 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-11 00:35:17.112205 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-11 00:35:17.112216 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-11 00:35:17.112228 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-11 00:35:17.112240 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:35:17.112253 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-11 00:35:17.112265 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-11 00:35:17.112278 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-11 00:35:17.112321 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-11 00:35:17.112334 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:35:17.112346 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:35:17.112359 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-11 00:35:17.112371 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-11 00:35:17.112384 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-11 00:35:17.112396 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-11 00:35:17.112408 | 
orchestrator | skipping: [testbed-node-5] 2025-09-11 00:35:17.112420 | orchestrator | 2025-09-11 00:35:17.112432 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-11 00:35:17.112461 | orchestrator | Thursday 11 September 2025 00:35:15 +0000 (0:00:01.640) 0:00:41.655 **** 2025-09-11 00:35:17.112474 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:35:17.112487 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:35:17.112499 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:35:17.112511 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:35:17.112523 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:35:17.112535 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:35:17.112552 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:35:17.112564 | orchestrator | 2025-09-11 00:35:17.112576 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-11 00:35:17.112587 | orchestrator | Thursday 11 September 2025 00:35:16 +0000 (0:00:00.541) 0:00:42.196 **** 2025-09-11 00:35:17.112597 | orchestrator | skipping: [testbed-manager] 2025-09-11 00:35:17.112608 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:35:17.112618 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:35:17.112629 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:35:17.112639 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:35:17.112650 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:35:17.112660 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:35:17.112671 | orchestrator | 2025-09-11 00:35:17.112681 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:35:17.112693 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-11 00:35:17.112704 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 00:35:17.112715 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 00:35:17.112725 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 00:35:17.112736 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 00:35:17.112746 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 00:35:17.112757 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 00:35:17.112767 | orchestrator | 2025-09-11 00:35:17.112778 | orchestrator | 2025-09-11 00:35:17.112789 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:35:17.112800 | orchestrator | Thursday 11 September 2025 00:35:16 +0000 (0:00:00.526) 0:00:42.722 **** 2025-09-11 00:35:17.112810 | orchestrator | =============================================================================== 2025-09-11 00:35:17.112827 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.94s 2025-09-11 00:35:17.112838 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.87s 2025-09-11 00:35:17.112849 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.22s 2025-09-11 00:35:17.112859 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.38s 2025-09-11 00:35:17.112869 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.20s 2025-09-11 00:35:17.112880 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.98s 2025-09-11 00:35:17.112891 | orchestrator | osism.commons.network : Remove netplan 
configuration template ----------- 1.81s 2025-09-11 00:35:17.112901 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.64s 2025-09-11 00:35:17.112912 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.57s 2025-09-11 00:35:17.112922 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.51s 2025-09-11 00:35:17.112932 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.43s 2025-09-11 00:35:17.112943 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.20s 2025-09-11 00:35:17.112954 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.16s 2025-09-11 00:35:17.112964 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.12s 2025-09-11 00:35:17.112975 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s 2025-09-11 00:35:17.112985 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.04s 2025-09-11 00:35:17.112996 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2025-09-11 00:35:17.113006 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.92s 2025-09-11 00:35:17.113017 | orchestrator | osism.commons.network : Create required directories --------------------- 0.89s 2025-09-11 00:35:17.113028 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.85s 2025-09-11 00:35:17.271364 | orchestrator | + osism apply wireguard 2025-09-11 00:35:29.115965 | orchestrator | 2025-09-11 00:35:29 | INFO  | Task b88005aa-5677-4fe3-905c-a67da1aea595 (wireguard) was prepared for execution. 
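The two "Create systemd networkd netdev files" / "Create systemd networkd network files" tasks above render per-host file pairs under /etc/systemd/network/. The rendered files are not shown in the log; an illustrative sketch of their shape (the exact template belongs to osism.commons.network, values taken from the vxlan0 loop item for testbed-manager: vni 42, mtu 1350, address 192.168.112.5/20, local 192.168.16.5) would be:

```ini
# /etc/systemd/network/30-vxlan0.netdev  (illustrative, not from the log)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
# One static peer/FDB entry per address in the item's 'dests' list is
# configured alongside, so each host floods to every other underlay IP.

# /etc/systemd/network/30-vxlan0.network  (illustrative, not from the log)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

This also explains the later cleanup task: "Remove unused configuration files" skips exactly /etc/systemd/network/30-vxlan{0,1}.netdev and .network on every host, because those are the files just written.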
2025-09-11 00:35:29.116062 | orchestrator | 2025-09-11 00:35:29 | INFO  | It takes a moment until task b88005aa-5677-4fe3-905c-a67da1aea595 (wireguard) has been started and output is visible here. 2025-09-11 00:35:46.366226 | orchestrator | 2025-09-11 00:35:46.366406 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-11 00:35:46.366437 | orchestrator | 2025-09-11 00:35:46.366458 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-11 00:35:46.366502 | orchestrator | Thursday 11 September 2025 00:35:32 +0000 (0:00:00.166) 0:00:00.166 **** 2025-09-11 00:35:46.366525 | orchestrator | ok: [testbed-manager] 2025-09-11 00:35:46.366545 | orchestrator | 2025-09-11 00:35:46.366566 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-11 00:35:46.366586 | orchestrator | Thursday 11 September 2025 00:35:33 +0000 (0:00:01.157) 0:00:01.324 **** 2025-09-11 00:35:46.366606 | orchestrator | changed: [testbed-manager] 2025-09-11 00:35:46.366626 | orchestrator | 2025-09-11 00:35:46.366646 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-11 00:35:46.366666 | orchestrator | Thursday 11 September 2025 00:35:39 +0000 (0:00:05.311) 0:00:06.635 **** 2025-09-11 00:35:46.366686 | orchestrator | changed: [testbed-manager] 2025-09-11 00:35:46.366705 | orchestrator | 2025-09-11 00:35:46.366724 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-11 00:35:46.366744 | orchestrator | Thursday 11 September 2025 00:35:39 +0000 (0:00:00.540) 0:00:07.176 **** 2025-09-11 00:35:46.366763 | orchestrator | changed: [testbed-manager] 2025-09-11 00:35:46.366819 | orchestrator | 2025-09-11 00:35:46.366841 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-11 00:35:46.366861 | orchestrator 
| Thursday 11 September 2025 00:35:40 +0000 (0:00:00.409) 0:00:07.586 **** 2025-09-11 00:35:46.366874 | orchestrator | ok: [testbed-manager] 2025-09-11 00:35:46.366885 | orchestrator | 2025-09-11 00:35:46.366896 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-11 00:35:46.366906 | orchestrator | Thursday 11 September 2025 00:35:40 +0000 (0:00:00.518) 0:00:08.105 **** 2025-09-11 00:35:46.366917 | orchestrator | ok: [testbed-manager] 2025-09-11 00:35:46.366928 | orchestrator | 2025-09-11 00:35:46.366939 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-11 00:35:46.366949 | orchestrator | Thursday 11 September 2025 00:35:41 +0000 (0:00:00.496) 0:00:08.601 **** 2025-09-11 00:35:46.366960 | orchestrator | ok: [testbed-manager] 2025-09-11 00:35:46.366971 | orchestrator | 2025-09-11 00:35:46.366981 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-11 00:35:46.366992 | orchestrator | Thursday 11 September 2025 00:35:41 +0000 (0:00:00.407) 0:00:09.009 **** 2025-09-11 00:35:46.367003 | orchestrator | changed: [testbed-manager] 2025-09-11 00:35:46.367014 | orchestrator | 2025-09-11 00:35:46.367024 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-11 00:35:46.367035 | orchestrator | Thursday 11 September 2025 00:35:42 +0000 (0:00:01.114) 0:00:10.123 **** 2025-09-11 00:35:46.367046 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-11 00:35:46.367057 | orchestrator | changed: [testbed-manager] 2025-09-11 00:35:46.367067 | orchestrator | 2025-09-11 00:35:46.367078 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-11 00:35:46.367089 | orchestrator | Thursday 11 September 2025 00:35:43 +0000 (0:00:00.873) 0:00:10.997 **** 2025-09-11 00:35:46.367100 | orchestrator | changed: 
[testbed-manager] 2025-09-11 00:35:46.367110 | orchestrator | 2025-09-11 00:35:46.367121 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-11 00:35:46.367132 | orchestrator | Thursday 11 September 2025 00:35:45 +0000 (0:00:01.576) 0:00:12.574 **** 2025-09-11 00:35:46.367142 | orchestrator | changed: [testbed-manager] 2025-09-11 00:35:46.367153 | orchestrator | 2025-09-11 00:35:46.367164 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:35:46.367175 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:35:46.367187 | orchestrator | 2025-09-11 00:35:46.367198 | orchestrator | 2025-09-11 00:35:46.367209 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:35:46.367220 | orchestrator | Thursday 11 September 2025 00:35:46 +0000 (0:00:00.930) 0:00:13.505 **** 2025-09-11 00:35:46.367231 | orchestrator | =============================================================================== 2025-09-11 00:35:46.367241 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.31s 2025-09-11 00:35:46.367252 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.58s 2025-09-11 00:35:46.367306 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.16s 2025-09-11 00:35:46.367318 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.11s 2025-09-11 00:35:46.367329 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s 2025-09-11 00:35:46.367339 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.87s 2025-09-11 00:35:46.367350 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 
2025-09-11 00:35:46.367361 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-09-11 00:35:46.367372 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.50s 2025-09-11 00:35:46.367382 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2025-09-11 00:35:46.367437 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-09-11 00:35:46.614950 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-11 00:35:46.651967 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-11 00:35:46.652026 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-11 00:35:46.725322 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 190 0 --:--:-- --:--:-- --:--:-- 191 2025-09-11 00:35:46.741804 | orchestrator | + osism apply --environment custom workarounds 2025-09-11 00:35:48.541157 | orchestrator | 2025-09-11 00:35:48 | INFO  | Trying to run play workarounds in environment custom 2025-09-11 00:35:58.649142 | orchestrator | 2025-09-11 00:35:58 | INFO  | Task 35d86cf5-cd5d-4bf9-bac0-497fef7acedf (workarounds) was prepared for execution. 2025-09-11 00:35:58.649230 | orchestrator | 2025-09-11 00:35:58 | INFO  | It takes a moment until task 35d86cf5-cd5d-4bf9-bac0-497fef7acedf (workarounds) has been started and output is visible here. 
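The wg0.conf copied by "Copy wg0.conf configuration file" above is templated by osism.services.wireguard and not printed in the log. A minimal WireGuard server configuration of that general shape (all addresses, interface names, and ports here are illustrative assumptions, not values from this run) looks like:

```ini
# Illustrative sketch only -- the real file is rendered by the role.
[Interface]
# Private key from "Create public and private key - server"
PrivateKey = <server-private-key>
Address = 192.168.48.1/24        # assumed tunnel subnet
ListenPort = 51820
# NAT tunnel traffic out; the role installs iptables for rules like this
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>   # from "Create preshared key"
AllowedIPs = 192.168.48.2/32
```

The "Manage wg-quick@wg0.service service" task then enables the interface as a systemd unit, and the "Restart wg0 service" handler picks up the new configuration.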
2025-09-11 00:36:21.812454 | orchestrator | 2025-09-11 00:36:21.812546 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 00:36:21.812557 | orchestrator | 2025-09-11 00:36:21.812565 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-11 00:36:21.812573 | orchestrator | Thursday 11 September 2025 00:36:02 +0000 (0:00:00.110) 0:00:00.110 **** 2025-09-11 00:36:21.812581 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-11 00:36:21.812589 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-11 00:36:21.812596 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-11 00:36:21.812604 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-11 00:36:21.812611 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-11 00:36:21.812618 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-11 00:36:21.812625 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-11 00:36:21.812632 | orchestrator | 2025-09-11 00:36:21.812639 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-11 00:36:21.812646 | orchestrator | 2025-09-11 00:36:21.812654 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-11 00:36:21.812661 | orchestrator | Thursday 11 September 2025 00:36:02 +0000 (0:00:00.552) 0:00:00.662 **** 2025-09-11 00:36:21.812668 | orchestrator | ok: [testbed-manager] 2025-09-11 00:36:21.812677 | orchestrator | 2025-09-11 00:36:21.812684 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-11 00:36:21.812692 | orchestrator | 2025-09-11 00:36:21.812699 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-09-11 00:36:21.812706 | orchestrator | Thursday 11 September 2025 00:36:04 +0000 (0:00:02.025) 0:00:02.687 **** 2025-09-11 00:36:21.812713 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:36:21.812721 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:36:21.812728 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:36:21.812735 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:36:21.812742 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:36:21.812749 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:36:21.812757 | orchestrator | 2025-09-11 00:36:21.812765 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-11 00:36:21.812772 | orchestrator | 2025-09-11 00:36:21.812779 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-11 00:36:21.812787 | orchestrator | Thursday 11 September 2025 00:36:06 +0000 (0:00:01.724) 0:00:04.412 **** 2025-09-11 00:36:21.812795 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-11 00:36:21.812803 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-11 00:36:21.812829 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-11 00:36:21.812837 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-11 00:36:21.812844 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-11 00:36:21.812851 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-11 00:36:21.812858 | orchestrator | 2025-09-11 00:36:21.812866 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-09-11 00:36:21.812873 | orchestrator | Thursday 11 September 2025 00:36:07 +0000 (0:00:01.428) 0:00:05.841 **** 2025-09-11 00:36:21.812880 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:36:21.812888 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:36:21.812895 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:36:21.812902 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:36:21.812909 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:36:21.812916 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:36:21.812923 | orchestrator | 2025-09-11 00:36:21.812930 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-11 00:36:21.812938 | orchestrator | Thursday 11 September 2025 00:36:11 +0000 (0:00:03.566) 0:00:09.407 **** 2025-09-11 00:36:21.812945 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:36:21.812952 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:36:21.812959 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:36:21.812966 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:36:21.812973 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:36:21.812980 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:36:21.812987 | orchestrator | 2025-09-11 00:36:21.812994 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-11 00:36:21.813001 | orchestrator | 2025-09-11 00:36:21.813009 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-11 00:36:21.813017 | orchestrator | Thursday 11 September 2025 00:36:11 +0000 (0:00:00.646) 0:00:10.053 **** 2025-09-11 00:36:21.813026 | orchestrator | changed: [testbed-manager] 2025-09-11 00:36:21.813034 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:36:21.813042 | orchestrator | changed: [testbed-node-1] 2025-09-11 
00:36:21.813051 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:36:21.813060 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:36:21.813068 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:36:21.813076 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:36:21.813084 | orchestrator |
2025-09-11 00:36:21.813092 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-09-11 00:36:21.813100 | orchestrator | Thursday 11 September 2025 00:36:13 +0000 (0:00:01.795) 0:00:11.849 ****
2025-09-11 00:36:21.813121 | orchestrator | changed: [testbed-manager]
2025-09-11 00:36:21.813130 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:36:21.813139 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:36:21.813147 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:36:21.813156 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:36:21.813164 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:36:21.813185 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:36:21.813194 | orchestrator |
2025-09-11 00:36:21.813203 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-09-11 00:36:21.813211 | orchestrator | Thursday 11 September 2025 00:36:15 +0000 (0:00:01.575) 0:00:13.425 ****
2025-09-11 00:36:21.813220 | orchestrator | ok: [testbed-manager]
2025-09-11 00:36:21.813252 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:36:21.813261 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:36:21.813270 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:36:21.813278 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:36:21.813286 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:36:21.813301 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:36:21.813310 | orchestrator |
2025-09-11 00:36:21.813318 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-09-11 00:36:21.813327 | orchestrator | Thursday 11 September 2025 00:36:16 +0000 (0:00:01.462) 0:00:14.887 ****
2025-09-11 00:36:21.813335 | orchestrator | changed: [testbed-manager]
2025-09-11 00:36:21.813344 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:36:21.813352 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:36:21.813361 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:36:21.813369 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:36:21.813377 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:36:21.813384 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:36:21.813391 | orchestrator |
2025-09-11 00:36:21.813399 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-09-11 00:36:21.813406 | orchestrator | Thursday 11 September 2025 00:36:18 +0000 (0:00:01.711) 0:00:16.598 ****
2025-09-11 00:36:21.813413 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:36:21.813420 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:36:21.813428 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:36:21.813435 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:36:21.813442 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:36:21.813449 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:36:21.813456 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:36:21.813463 | orchestrator |
2025-09-11 00:36:21.813471 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-09-11 00:36:21.813478 | orchestrator |
2025-09-11 00:36:21.813485 | orchestrator | TASK [Install python3-docker] **************************************************
2025-09-11 00:36:21.813492 | orchestrator | Thursday 11 September 2025 00:36:19 +0000 (0:00:00.596) 0:00:17.195 ****
2025-09-11 00:36:21.813500 | orchestrator | ok: [testbed-manager]
2025-09-11 00:36:21.813507 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:36:21.813514 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:36:21.813522 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:36:21.813529 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:36:21.813536 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:36:21.813543 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:36:21.813550 | orchestrator |
2025-09-11 00:36:21.813558 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:36:21.813566 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 00:36:21.813575 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:21.813582 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:21.813590 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:21.813597 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:21.813604 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:21.813611 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:21.813618 | orchestrator |
2025-09-11 00:36:21.813626 | orchestrator |
2025-09-11 00:36:21.813633 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:36:21.813640 | orchestrator | Thursday 11 September 2025 00:36:21 +0000 (0:00:02.657) 0:00:19.853 ****
2025-09-11 00:36:21.813653 | orchestrator | ===============================================================================
2025-09-11 00:36:21.813660 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.57s
2025-09-11 00:36:21.813668 | orchestrator | Install python3-docker -------------------------------------------------- 2.66s
2025-09-11 00:36:21.813675 | orchestrator | Apply netplan configuration --------------------------------------------- 2.03s
2025-09-11 00:36:21.813682 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.80s
2025-09-11 00:36:21.813689 | orchestrator | Apply netplan configuration --------------------------------------------- 1.73s
2025-09-11 00:36:21.813697 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.71s
2025-09-11 00:36:21.813704 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s
2025-09-11 00:36:21.813711 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.46s
2025-09-11 00:36:21.813722 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.43s
2025-09-11 00:36:21.813730 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s
2025-09-11 00:36:21.813737 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s
2025-09-11 00:36:21.813749 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.55s
2025-09-11 00:36:22.358471 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-09-11 00:36:34.368794 | orchestrator | 2025-09-11 00:36:34 | INFO  | Task 4bcca2f7-4e1e-4377-9184-22b6700f3a57 (reboot) was prepared for execution.
2025-09-11 00:36:34.368902 | orchestrator | 2025-09-11 00:36:34 | INFO  | It takes a moment until task 4bcca2f7-4e1e-4377-9184-22b6700f3a57 (reboot) has been started and output is visible here.
2025-09-11 00:36:42.932540 | orchestrator |
2025-09-11 00:36:42.932646 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-11 00:36:42.932664 | orchestrator |
2025-09-11 00:36:42.932715 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-11 00:36:42.932729 | orchestrator | Thursday 11 September 2025 00:36:37 +0000 (0:00:00.153) 0:00:00.153 ****
2025-09-11 00:36:42.932740 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:36:42.932751 | orchestrator |
2025-09-11 00:36:42.932762 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-11 00:36:42.932773 | orchestrator | Thursday 11 September 2025 00:36:37 +0000 (0:00:00.079) 0:00:00.233 ****
2025-09-11 00:36:42.932784 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:36:42.932795 | orchestrator |
2025-09-11 00:36:42.932806 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-11 00:36:42.932817 | orchestrator | Thursday 11 September 2025 00:36:38 +0000 (0:00:00.804) 0:00:01.038 ****
2025-09-11 00:36:42.932827 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:36:42.932838 | orchestrator |
2025-09-11 00:36:42.932849 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-11 00:36:42.932860 | orchestrator |
2025-09-11 00:36:42.932871 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-11 00:36:42.932882 | orchestrator | Thursday 11 September 2025 00:36:38 +0000 (0:00:00.110) 0:00:01.148 ****
2025-09-11 00:36:42.932892 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:36:42.932903 | orchestrator |
2025-09-11 00:36:42.932914 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-11 00:36:42.932924 | orchestrator | Thursday 11 September 2025 00:36:38 +0000 (0:00:00.096) 0:00:01.245 ****
2025-09-11 00:36:42.932935 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:36:42.932946 | orchestrator |
2025-09-11 00:36:42.932957 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-11 00:36:42.932967 | orchestrator | Thursday 11 September 2025 00:36:39 +0000 (0:00:00.576) 0:00:01.822 ****
2025-09-11 00:36:42.932978 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:36:42.932989 | orchestrator |
2025-09-11 00:36:42.933020 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-11 00:36:42.933032 | orchestrator |
2025-09-11 00:36:42.933043 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-11 00:36:42.933053 | orchestrator | Thursday 11 September 2025 00:36:39 +0000 (0:00:00.094) 0:00:01.916 ****
2025-09-11 00:36:42.933064 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:36:42.933074 | orchestrator |
2025-09-11 00:36:42.933085 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-11 00:36:42.933096 | orchestrator | Thursday 11 September 2025 00:36:39 +0000 (0:00:00.151) 0:00:02.068 ****
2025-09-11 00:36:42.933110 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:36:42.933122 | orchestrator |
2025-09-11 00:36:42.933134 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-11 00:36:42.933146 | orchestrator | Thursday 11 September 2025 00:36:40 +0000 (0:00:00.598) 0:00:02.667 ****
2025-09-11 00:36:42.933158 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:36:42.933170 | orchestrator |
2025-09-11 00:36:42.933182 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-11 00:36:42.933194 | orchestrator |
2025-09-11 00:36:42.933207 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-11 00:36:42.933239 | orchestrator | Thursday 11 September 2025 00:36:40 +0000 (0:00:00.093) 0:00:02.760 ****
2025-09-11 00:36:42.933252 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:36:42.933262 | orchestrator |
2025-09-11 00:36:42.933273 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-11 00:36:42.933284 | orchestrator | Thursday 11 September 2025 00:36:40 +0000 (0:00:00.080) 0:00:02.841 ****
2025-09-11 00:36:42.933295 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:36:42.933305 | orchestrator |
2025-09-11 00:36:42.933316 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-11 00:36:42.933327 | orchestrator | Thursday 11 September 2025 00:36:41 +0000 (0:00:00.606) 0:00:03.447 ****
2025-09-11 00:36:42.933337 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:36:42.933348 | orchestrator |
2025-09-11 00:36:42.933359 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-11 00:36:42.933369 | orchestrator |
2025-09-11 00:36:42.933380 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-11 00:36:42.933390 | orchestrator | Thursday 11 September 2025 00:36:41 +0000 (0:00:00.086) 0:00:03.534 ****
2025-09-11 00:36:42.933401 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:36:42.933412 | orchestrator |
2025-09-11 00:36:42.933422 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-11 00:36:42.933433 | orchestrator | Thursday 11 September 2025 00:36:41 +0000 (0:00:00.095) 0:00:03.630 ****
2025-09-11 00:36:42.933443 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:36:42.933454 | orchestrator |
2025-09-11 00:36:42.933465 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-11 00:36:42.933486 | orchestrator | Thursday 11 September 2025 00:36:41 +0000 (0:00:00.586) 0:00:04.216 ****
2025-09-11 00:36:42.933497 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:36:42.933507 | orchestrator |
2025-09-11 00:36:42.933518 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-11 00:36:42.933529 | orchestrator |
2025-09-11 00:36:42.933539 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-11 00:36:42.933550 | orchestrator | Thursday 11 September 2025 00:36:41 +0000 (0:00:00.101) 0:00:04.318 ****
2025-09-11 00:36:42.933561 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:36:42.933572 | orchestrator |
2025-09-11 00:36:42.933582 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-11 00:36:42.933593 | orchestrator | Thursday 11 September 2025 00:36:42 +0000 (0:00:00.092) 0:00:04.411 ****
2025-09-11 00:36:42.933603 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:36:42.933614 | orchestrator |
2025-09-11 00:36:42.933625 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-11 00:36:42.933644 | orchestrator | Thursday 11 September 2025 00:36:42 +0000 (0:00:00.634) 0:00:05.045 ****
2025-09-11 00:36:42.933670 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:36:42.933682 | orchestrator |
2025-09-11 00:36:42.933693 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:36:42.933704 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:42.933715 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:42.933726 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:42.933737 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:42.933747 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:42.933758 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:36:42.933769 | orchestrator |
2025-09-11 00:36:42.933780 | orchestrator |
2025-09-11 00:36:42.933790 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:36:42.933801 | orchestrator | Thursday 11 September 2025 00:36:42 +0000 (0:00:00.024) 0:00:05.070 ****
2025-09-11 00:36:42.933812 | orchestrator | ===============================================================================
2025-09-11 00:36:42.933823 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 3.81s
2025-09-11 00:36:42.933837 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.60s
2025-09-11 00:36:42.933849 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.51s
2025-09-11 00:36:43.124960 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-11 00:36:54.937079 | orchestrator | 2025-09-11 00:36:54 | INFO  | Task 35398dbb-3926-47ff-886b-b3b0983d5027 (wait-for-connection) was prepared for execution.
2025-09-11 00:36:54.937164 | orchestrator | 2025-09-11 00:36:54 | INFO  | It takes a moment until task 35398dbb-3926-47ff-886b-b3b0983d5027 (wait-for-connection) has been started and output is visible here.
2025-09-11 00:37:10.033887 | orchestrator | 2025-09-11 00:37:10.033993 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-11 00:37:10.034073 | orchestrator | 2025-09-11 00:37:10.034096 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-11 00:37:10.034114 | orchestrator | Thursday 11 September 2025 00:36:58 +0000 (0:00:00.174) 0:00:00.174 **** 2025-09-11 00:37:10.034130 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:37:10.034148 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:37:10.034165 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:37:10.034182 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:37:10.034198 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:37:10.034271 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:37:10.034288 | orchestrator | 2025-09-11 00:37:10.034305 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:37:10.034322 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:37:10.034339 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:37:10.034357 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:37:10.034403 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:37:10.034422 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:37:10.034440 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:37:10.034457 | orchestrator | 2025-09-11 00:37:10.034473 | orchestrator | 2025-09-11 00:37:10.034508 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-11 00:37:10.034531 | orchestrator | Thursday 11 September 2025 00:37:09 +0000 (0:00:11.396) 0:00:11.570 **** 2025-09-11 00:37:10.034550 | orchestrator | =============================================================================== 2025-09-11 00:37:10.034569 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.40s 2025-09-11 00:37:10.208531 | orchestrator | + osism apply hddtemp 2025-09-11 00:37:21.960006 | orchestrator | 2025-09-11 00:37:21 | INFO  | Task 03c47b52-414a-4a49-9ea1-83f7a1a7a3cf (hddtemp) was prepared for execution. 2025-09-11 00:37:21.960098 | orchestrator | 2025-09-11 00:37:21 | INFO  | It takes a moment until task 03c47b52-414a-4a49-9ea1-83f7a1a7a3cf (hddtemp) has been started and output is visible here. 2025-09-11 00:37:47.494457 | orchestrator | 2025-09-11 00:37:47.494584 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-11 00:37:47.494601 | orchestrator | 2025-09-11 00:37:47.494650 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-11 00:37:47.494663 | orchestrator | Thursday 11 September 2025 00:37:25 +0000 (0:00:00.256) 0:00:00.256 **** 2025-09-11 00:37:47.494673 | orchestrator | ok: [testbed-manager] 2025-09-11 00:37:47.494684 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:37:47.494695 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:37:47.494705 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:37:47.494714 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:37:47.494724 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:37:47.494733 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:37:47.494743 | orchestrator | 2025-09-11 00:37:47.494753 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-11 00:37:47.494762 | orchestrator | Thursday 11 September 2025 
00:37:26 +0000 (0:00:00.635) 0:00:00.891 **** 2025-09-11 00:37:47.494775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:37:47.494787 | orchestrator | 2025-09-11 00:37:47.494797 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-11 00:37:47.494807 | orchestrator | Thursday 11 September 2025 00:37:27 +0000 (0:00:01.002) 0:00:01.894 **** 2025-09-11 00:37:47.494816 | orchestrator | ok: [testbed-manager] 2025-09-11 00:37:47.494826 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:37:47.494836 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:37:47.494845 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:37:47.494855 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:37:47.494864 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:37:47.494873 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:37:47.494883 | orchestrator | 2025-09-11 00:37:47.494893 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-11 00:37:47.494902 | orchestrator | Thursday 11 September 2025 00:37:29 +0000 (0:00:01.842) 0:00:03.736 **** 2025-09-11 00:37:47.494912 | orchestrator | changed: [testbed-manager] 2025-09-11 00:37:47.494923 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:37:47.494932 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:37:47.494942 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:37:47.494951 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:37:47.494984 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:37:47.494994 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:37:47.495005 | orchestrator | 2025-09-11 00:37:47.495017 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-11 00:37:47.495028 | orchestrator | Thursday 11 September 2025 00:37:30 +0000 (0:00:00.967) 0:00:04.703 **** 2025-09-11 00:37:47.495039 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:37:47.495050 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:37:47.495061 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:37:47.495072 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:37:47.495083 | orchestrator | ok: [testbed-manager] 2025-09-11 00:37:47.495094 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:37:47.495104 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:37:47.495115 | orchestrator | 2025-09-11 00:37:47.495126 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-11 00:37:47.495137 | orchestrator | Thursday 11 September 2025 00:37:31 +0000 (0:00:01.064) 0:00:05.767 **** 2025-09-11 00:37:47.495148 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:37:47.495159 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:37:47.495170 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:37:47.495181 | orchestrator | changed: [testbed-manager] 2025-09-11 00:37:47.495213 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:37:47.495228 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:37:47.495245 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:37:47.495265 | orchestrator | 2025-09-11 00:37:47.495280 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-11 00:37:47.495297 | orchestrator | Thursday 11 September 2025 00:37:32 +0000 (0:00:00.662) 0:00:06.430 **** 2025-09-11 00:37:47.495310 | orchestrator | changed: [testbed-manager] 2025-09-11 00:37:47.495321 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:37:47.495331 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:37:47.495341 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:37:47.495350 | orchestrator | changed: 
[testbed-node-3] 2025-09-11 00:37:47.495359 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:37:47.495369 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:37:47.495378 | orchestrator | 2025-09-11 00:37:47.495388 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-11 00:37:47.495397 | orchestrator | Thursday 11 September 2025 00:37:43 +0000 (0:00:11.961) 0:00:18.391 **** 2025-09-11 00:37:47.495408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:37:47.495417 | orchestrator | 2025-09-11 00:37:47.495427 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-11 00:37:47.495436 | orchestrator | Thursday 11 September 2025 00:37:45 +0000 (0:00:01.288) 0:00:19.680 **** 2025-09-11 00:37:47.495446 | orchestrator | changed: [testbed-manager] 2025-09-11 00:37:47.495468 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:37:47.495478 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:37:47.495487 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:37:47.495497 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:37:47.495506 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:37:47.495515 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:37:47.495525 | orchestrator | 2025-09-11 00:37:47.495535 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:37:47.495544 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:37:47.495572 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-11 00:37:47.495583 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-11 00:37:47.495601 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-11 00:37:47.495611 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-11 00:37:47.495621 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-11 00:37:47.495630 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-11 00:37:47.495640 | orchestrator | 2025-09-11 00:37:47.495650 | orchestrator | 2025-09-11 00:37:47.495660 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:37:47.495669 | orchestrator | Thursday 11 September 2025 00:37:47 +0000 (0:00:01.871) 0:00:21.552 **** 2025-09-11 00:37:47.495679 | orchestrator | =============================================================================== 2025-09-11 00:37:47.495688 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.96s 2025-09-11 00:37:47.495698 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.87s 2025-09-11 00:37:47.495707 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.84s 2025-09-11 00:37:47.495716 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.29s 2025-09-11 00:37:47.495726 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.06s 2025-09-11 00:37:47.495735 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.00s 2025-09-11 00:37:47.495745 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.97s 2025-09-11 00:37:47.495754 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.66s 2025-09-11 00:37:47.495764 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.64s 2025-09-11 00:37:47.753311 | orchestrator | ++ semver latest 7.1.1 2025-09-11 00:37:47.795908 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-11 00:37:47.795980 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-11 00:37:47.795995 | orchestrator | + sudo systemctl restart manager.service 2025-09-11 00:38:01.534987 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-11 00:38:01.535080 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-11 00:38:01.535095 | orchestrator | + local max_attempts=60 2025-09-11 00:38:01.535107 | orchestrator | + local name=ceph-ansible 2025-09-11 00:38:01.535118 | orchestrator | + local attempt_num=1 2025-09-11 00:38:01.535129 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-11 00:38:01.564254 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-11 00:38:01.564317 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-11 00:38:01.564332 | orchestrator | + sleep 5 2025-09-11 00:38:06.566432 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-11 00:38:06.680389 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-11 00:38:06.680479 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-11 00:38:06.680495 | orchestrator | + sleep 5 2025-09-11 00:38:11.683665 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-11 00:38:11.721694 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-11 00:38:11.721776 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-11 00:38:11.721790 | orchestrator | + sleep 5 2025-09-11 00:38:16.726358 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-11 00:38:16.765331 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-11 00:38:16.765407 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-11 00:38:16.765422 | orchestrator | + sleep 5
2025-09-11 00:38:21.770405 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:38:21.807157 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-11 00:38:21.807306 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-11 00:38:21.807324 | orchestrator | + sleep 5
2025-09-11 00:38:26.812651 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:38:26.853902 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-11 00:38:26.853958 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-11 00:38:26.853971 | orchestrator | + sleep 5
2025-09-11 00:38:31.858135 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:38:31.892365 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-11 00:38:31.892467 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-11 00:38:31.892481 | orchestrator | + sleep 5
2025-09-11 00:38:36.899322 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:38:36.937608 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-11 00:38:36.937667 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-11 00:38:36.937681 | orchestrator | + sleep 5
2025-09-11 00:38:41.941135 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:38:41.973278 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-11 00:38:41.973363 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-11 00:38:41.973385 | orchestrator | + sleep 5
2025-09-11 00:38:46.976861 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:38:47.013290 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-11 00:38:47.013361 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-11 00:38:47.013376 | orchestrator | + sleep 5
2025-09-11 00:38:52.017323 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:38:52.082656 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-11 00:38:52.082757 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-11 00:38:52.082771 | orchestrator | + sleep 5
2025-09-11 00:38:57.086118 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:38:57.124062 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-11 00:38:57.124123 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-11 00:38:57.124134 | orchestrator | + sleep 5
2025-09-11 00:39:02.129029 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:39:02.167681 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-11 00:39:02.167776 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-11 00:39:02.167792 | orchestrator | + sleep 5
2025-09-11 00:39:07.173637 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-11 00:39:07.205113 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-11 00:39:07.205198 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-11 00:39:07.205213 | orchestrator | + local max_attempts=60
2025-09-11 00:39:07.205226 | orchestrator | + local name=kolla-ansible
2025-09-11 00:39:07.205238 | orchestrator | + local attempt_num=1
2025-09-11 00:39:07.205603 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-11 00:39:07.228956 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-11 00:39:07.228997 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-11 00:39:07.229012 | orchestrator | + local max_attempts=60
2025-09-11 00:39:07.229025 | orchestrator | + local name=osism-ansible
2025-09-11 00:39:07.229037 | orchestrator | + local attempt_num=1
2025-09-11 00:39:07.229343 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-11 00:39:07.260256 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-11 00:39:07.260331 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-11 00:39:07.260345 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-11 00:39:07.417587 | orchestrator | ARA in ceph-ansible already disabled.
2025-09-11 00:39:07.542858 | orchestrator | ARA in kolla-ansible already disabled.
2025-09-11 00:39:07.680536 | orchestrator | ARA in osism-ansible already disabled.
2025-09-11 00:39:07.801650 | orchestrator | ARA in osism-kubernetes already disabled.
2025-09-11 00:39:07.802221 | orchestrator | + osism apply gather-facts
2025-09-11 00:39:19.631642 | orchestrator | 2025-09-11 00:39:19 | INFO  | Task 6959ae66-7865-4bae-bc3f-5046a4eaea84 (gather-facts) was prepared for execution.
2025-09-11 00:39:19.631727 | orchestrator | 2025-09-11 00:39:19 | INFO  | It takes a moment until task 6959ae66-7865-4bae-bc3f-5046a4eaea84 (gather-facts) has been started and output is visible here.
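The trace above polls `docker inspect -f '{{.State.Health.Status}}'` every five seconds until each tooling container reports `healthy`, giving up after a fixed number of attempts. A minimal sketch of such a wait loop, reconstructed from the trace (the exact function body is an assumption; the real helper lives in the testbed's deploy scripts):

```shell
#!/usr/bin/env bash
set -e

# Poll a container's health status until it is "healthy", sleeping 5 s
# between checks, and fail once max_attempts polls have been used up.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name is still not healthy, giving up" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the trace it is invoked as `wait_for_container_healthy 60 ceph-ansible`, i.e. up to 60 polls (about five minutes) per container before the job would fail.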
2025-09-11 00:39:33.161782 | orchestrator |
2025-09-11 00:39:33.161875 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-11 00:39:33.161912 | orchestrator |
2025-09-11 00:39:33.161922 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-11 00:39:33.161932 | orchestrator | Thursday 11 September 2025 00:39:23 +0000 (0:00:00.178) 0:00:00.178 ****
2025-09-11 00:39:33.161942 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:39:33.161953 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:39:33.161962 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:39:33.161972 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:39:33.161981 | orchestrator | ok: [testbed-manager]
2025-09-11 00:39:33.161990 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:39:33.162000 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:39:33.162009 | orchestrator |
2025-09-11 00:39:33.162070 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-11 00:39:33.162081 | orchestrator |
2025-09-11 00:39:33.162091 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-11 00:39:33.162100 | orchestrator | Thursday 11 September 2025 00:39:32 +0000 (0:00:09.297) 0:00:09.476 ****
2025-09-11 00:39:33.162110 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:39:33.162128 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:39:33.162138 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:39:33.162180 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:39:33.162190 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:39:33.162200 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:39:33.162209 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:39:33.162219 | orchestrator |
2025-09-11 00:39:33.162228 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:39:33.162238 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 00:39:33.162248 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 00:39:33.162258 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 00:39:33.162267 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 00:39:33.162277 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 00:39:33.162287 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 00:39:33.162296 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 00:39:33.162306 | orchestrator |
2025-09-11 00:39:33.162316 | orchestrator |
2025-09-11 00:39:33.162325 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:39:33.162335 | orchestrator | Thursday 11 September 2025 00:39:32 +0000 (0:00:00.490) 0:00:09.966 ****
2025-09-11 00:39:33.162345 | orchestrator | ===============================================================================
2025-09-11 00:39:33.162367 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.30s
2025-09-11 00:39:33.162379 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s
2025-09-11 00:39:33.337897 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-09-11 00:39:33.348091 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-09-11 00:39:33.364593 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-09-11 00:39:33.382294 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-09-11 00:39:33.393130 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-09-11 00:39:33.404081 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-09-11 00:39:33.415021 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-09-11 00:39:33.433322 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-09-11 00:39:33.443759 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-09-11 00:39:33.459839 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-09-11 00:39:33.474794 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-09-11 00:39:33.491518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-09-11 00:39:33.510561 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-09-11 00:39:33.529701 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-09-11 00:39:33.549641 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-09-11 00:39:33.567478 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-09-11 00:39:33.586771 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-09-11 00:39:33.607424 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-09-11 00:39:33.624939 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-09-11 00:39:33.639211 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-09-11 00:39:33.655543 | orchestrator | + [[ false == \t\r\u\e ]]
2025-09-11 00:39:33.763584 | orchestrator | ok: Runtime: 0:23:15.166667
2025-09-11 00:39:33.858661 |
2025-09-11 00:39:33.858803 | TASK [Deploy services]
2025-09-11 00:39:34.392451 | orchestrator | skipping: Conditional result was False
2025-09-11 00:39:34.411016 |
2025-09-11 00:39:34.411199 | TASK [Deploy in a nutshell]
2025-09-11 00:39:35.110572 | orchestrator | + set -e
2025-09-11 00:39:35.112208 | orchestrator |
2025-09-11 00:39:35.112242 | orchestrator | # PULL IMAGES
2025-09-11 00:39:35.112256 | orchestrator |
2025-09-11 00:39:35.112274 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-11 00:39:35.112295 | orchestrator | ++ export INTERACTIVE=false
2025-09-11 00:39:35.112309 | orchestrator | ++ INTERACTIVE=false
2025-09-11 00:39:35.112349 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-11 00:39:35.112371 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-11 00:39:35.112386 | orchestrator | + source /opt/manager-vars.sh
2025-09-11 00:39:35.112397 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-11 00:39:35.112415 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-11 00:39:35.112427 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-11 00:39:35.112445 | orchestrator | ++ CEPH_VERSION=reef
2025-09-11 00:39:35.112456 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-11 00:39:35.112474 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-11 00:39:35.112485 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-11 00:39:35.112499 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-11 00:39:35.112510 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-11 00:39:35.112522 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-11 00:39:35.112533 | orchestrator | ++ export ARA=false
2025-09-11 00:39:35.112544 | orchestrator | ++ ARA=false
2025-09-11 00:39:35.112555 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-11 00:39:35.112566 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-11 00:39:35.112607 | orchestrator | ++ export TEMPEST=true
2025-09-11 00:39:35.112618 | orchestrator | ++ TEMPEST=true
2025-09-11 00:39:35.112629 | orchestrator | ++ export IS_ZUUL=true
2025-09-11 00:39:35.112640 | orchestrator | ++ IS_ZUUL=true
2025-09-11 00:39:35.112651 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14
2025-09-11 00:39:35.112662 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14
2025-09-11 00:39:35.112672 | orchestrator | ++ export EXTERNAL_API=false
2025-09-11 00:39:35.112683 | orchestrator | ++ EXTERNAL_API=false
2025-09-11 00:39:35.112694 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-11 00:39:35.112705 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-11 00:39:35.112716 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-11 00:39:35.112726 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-11 00:39:35.112738 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-11 00:39:35.112748 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-11 00:39:35.112759 | orchestrator | + echo
2025-09-11 00:39:35.112770 | orchestrator | + echo '# PULL IMAGES'
2025-09-11 00:39:35.112781 | orchestrator | + echo
2025-09-11 00:39:35.112798 | orchestrator | ++ semver latest 7.0.0
2025-09-11 00:39:35.167212 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-11 00:39:35.167283 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-11 00:39:35.167296 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-09-11 00:39:36.756902 | orchestrator | 2025-09-11 00:39:36 | INFO  | Trying to run play pull-images in environment custom
2025-09-11 00:39:46.967324 | orchestrator | 2025-09-11 00:39:46 | INFO  | Task 72494610-86b7-47bb-bf91-b2acc4c00ba1 (pull-images) was prepared for execution.
2025-09-11 00:39:46.967421 | orchestrator | 2025-09-11 00:39:46 | INFO  | Task 72494610-86b7-47bb-bf91-b2acc4c00ba1 is running in background. No more output. Check ARA for logs.
2025-09-11 00:39:49.098276 | orchestrator | 2025-09-11 00:39:49 | INFO  | Trying to run play wipe-partitions in environment custom
2025-09-11 00:39:59.206476 | orchestrator | 2025-09-11 00:39:59 | INFO  | Task 58e55612-a560-4a5f-867c-bf86f4e27015 (wipe-partitions) was prepared for execution.
2025-09-11 00:39:59.206608 | orchestrator | 2025-09-11 00:39:59 | INFO  | It takes a moment until task 58e55612-a560-4a5f-867c-bf86f4e27015 (wipe-partitions) has been started and output is visible here.
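The `# PULL IMAGES` step gates its invocation on the manager version: `semver latest 7.0.0` prints `-1`, the `-ge 0` test fails, and the literal `latest` check then selects the background variant of the play. A sketch of that gate as a function (the `semver` helper printing `-1`/`0`/`1` and the `--no-wait` invocation are taken from the trace; the fallback branch for older managers is an assumption):

```shell
# Run pull-images in the background for manager >= 7.0.0 or the
# "latest" tag; otherwise fall back to a foreground run (assumed).
pull_images() {
    local version=$1
    if [[ "$(semver "$version" 7.0.0)" -ge 0 || "$version" == "latest" ]]; then
        # Background run in the "custom" environment, two retries (from the trace).
        osism apply --no-wait -r 2 -e custom pull-images
    else
        # Hypothetical pre-7.0.0 invocation, not shown in the trace.
        osism apply -r 2 pull-images
    fi
}
```

With `MANAGER_VERSION=latest`, as exported by `/opt/manager-vars.sh` above, this takes the background path.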
2025-09-11 00:40:11.790727 | orchestrator |
2025-09-11 00:40:11.790872 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-09-11 00:40:11.790891 | orchestrator |
2025-09-11 00:40:11.790903 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-09-11 00:40:11.790920 | orchestrator | Thursday 11 September 2025 00:40:03 +0000 (0:00:00.121) 0:00:00.121 ****
2025-09-11 00:40:11.790932 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:40:11.790943 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:40:11.790955 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:40:11.790966 | orchestrator |
2025-09-11 00:40:11.790978 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-09-11 00:40:11.791013 | orchestrator | Thursday 11 September 2025 00:40:04 +0000 (0:00:01.555) 0:00:01.677 ****
2025-09-11 00:40:11.791025 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:11.791036 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:40:11.791050 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:40:11.791061 | orchestrator |
2025-09-11 00:40:11.791073 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-09-11 00:40:11.791083 | orchestrator | Thursday 11 September 2025 00:40:04 +0000 (0:00:00.226) 0:00:01.903 ****
2025-09-11 00:40:11.791095 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:40:11.791106 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:40:11.791116 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:40:11.791163 | orchestrator |
2025-09-11 00:40:11.791176 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-09-11 00:40:11.791187 | orchestrator | Thursday 11 September 2025 00:40:05 +0000 (0:00:00.618) 0:00:02.521 ****
2025-09-11 00:40:11.791198 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:11.791209 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:40:11.791220 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:40:11.791231 | orchestrator |
2025-09-11 00:40:11.791241 | orchestrator | TASK [Check device availability] ***********************************************
2025-09-11 00:40:11.791252 | orchestrator | Thursday 11 September 2025 00:40:05 +0000 (0:00:00.214) 0:00:02.736 ****
2025-09-11 00:40:11.791266 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-11 00:40:11.791282 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-11 00:40:11.791295 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-11 00:40:11.791308 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-11 00:40:11.791321 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-11 00:40:11.791333 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-11 00:40:11.791346 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-11 00:40:11.791359 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-11 00:40:11.791372 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-11 00:40:11.791384 | orchestrator |
2025-09-11 00:40:11.791397 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-09-11 00:40:11.791411 | orchestrator | Thursday 11 September 2025 00:40:06 +0000 (0:00:01.182) 0:00:03.918 ****
2025-09-11 00:40:11.791424 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-09-11 00:40:11.791436 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-09-11 00:40:11.791449 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-09-11 00:40:11.791462 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-09-11 00:40:11.791474 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-09-11 00:40:11.791486 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-09-11 00:40:11.791499 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-09-11 00:40:11.791512 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-09-11 00:40:11.791524 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-09-11 00:40:11.791536 | orchestrator |
2025-09-11 00:40:11.791549 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-09-11 00:40:11.791561 | orchestrator | Thursday 11 September 2025 00:40:08 +0000 (0:00:01.317) 0:00:05.236 ****
2025-09-11 00:40:11.791574 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-11 00:40:11.791586 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-11 00:40:11.791599 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-11 00:40:11.791611 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-11 00:40:11.791623 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-11 00:40:11.791634 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-11 00:40:11.791644 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-11 00:40:11.791665 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-11 00:40:11.791682 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-11 00:40:11.791693 | orchestrator |
2025-09-11 00:40:11.791704 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-09-11 00:40:11.791715 | orchestrator | Thursday 11 September 2025 00:40:10 +0000 (0:00:02.179) 0:00:07.415 ****
2025-09-11 00:40:11.791726 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:40:11.791736 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:40:11.791747 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:40:11.791758 | orchestrator |
2025-09-11 00:40:11.791768 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-09-11 00:40:11.791779 | orchestrator | Thursday 11 September 2025 00:40:10 +0000 (0:00:00.556) 0:00:07.972 ****
2025-09-11 00:40:11.791790 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:40:11.791801 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:40:11.791811 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:40:11.791822 | orchestrator |
2025-09-11 00:40:11.791833 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:40:11.791846 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:40:11.791857 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:40:11.791885 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:40:11.791896 | orchestrator |
2025-09-11 00:40:11.791907 | orchestrator |
2025-09-11 00:40:11.791918 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:40:11.791929 | orchestrator | Thursday 11 September 2025 00:40:11 +0000 (0:00:00.610) 0:00:08.582 ****
2025-09-11 00:40:11.791939 | orchestrator | ===============================================================================
2025-09-11 00:40:11.791950 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.18s
2025-09-11 00:40:11.791961 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.56s
2025-09-11 00:40:11.791971 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.32s
2025-09-11 00:40:11.791982 | orchestrator | Check device availability ----------------------------------------------- 1.18s
2025-09-11 00:40:11.791993 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.62s
2025-09-11 00:40:11.792004 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s
2025-09-11 00:40:11.792014 | orchestrator | Reload udev rules ------------------------------------------------------- 0.56s
2025-09-11 00:40:11.792025 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s
2025-09-11 00:40:11.792036 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.21s
2025-09-11 00:40:23.668774 | orchestrator | 2025-09-11 00:40:23 | INFO  | Task a7727b02-63a4-405e-b801-113b7d056b11 (facts) was prepared for execution.
2025-09-11 00:40:23.668890 | orchestrator | 2025-09-11 00:40:23 | INFO  | It takes a moment until task a7727b02-63a4-405e-b801-113b7d056b11 (facts) has been started and output is visible here.
2025-09-11 00:40:35.137840 | orchestrator |
2025-09-11 00:40:35.137953 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-11 00:40:35.137968 | orchestrator |
2025-09-11 00:40:35.137979 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-11 00:40:35.137990 | orchestrator | Thursday 11 September 2025 00:40:27 +0000 (0:00:00.266) 0:00:00.266 ****
2025-09-11 00:40:35.138001 | orchestrator | ok: [testbed-manager]
2025-09-11 00:40:35.138012 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:40:35.138077 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:40:35.138113 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:40:35.138159 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:40:35.138169 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:40:35.138178 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:40:35.138188 | orchestrator |
2025-09-11 00:40:35.138198 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-11 00:40:35.138207 | orchestrator | Thursday 11 September 2025 00:40:28 +0000 (0:00:01.006) 0:00:01.272 ****
2025-09-11 00:40:35.138217 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:40:35.138227 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:40:35.138237 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:40:35.138246 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:40:35.138256 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:35.138266 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:40:35.138275 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:40:35.138285 | orchestrator |
2025-09-11 00:40:35.138294 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-11 00:40:35.138304 | orchestrator |
2025-09-11 00:40:35.138329 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-11 00:40:35.138339 | orchestrator | Thursday 11 September 2025 00:40:29 +0000 (0:00:01.175) 0:00:02.448 ****
2025-09-11 00:40:35.138349 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:40:35.138359 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:40:35.138369 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:40:35.138379 | orchestrator | ok: [testbed-manager]
2025-09-11 00:40:35.138390 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:40:35.138401 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:40:35.138412 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:40:35.138423 | orchestrator |
2025-09-11 00:40:35.138433 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-11 00:40:35.138444 | orchestrator |
2025-09-11 00:40:35.138455 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-11 00:40:35.138466 | orchestrator | Thursday 11 September 2025 00:40:34 +0000 (0:00:04.367) 0:00:06.816 ****
2025-09-11 00:40:35.138477 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:40:35.138488 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:40:35.138499 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:40:35.138510 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:40:35.138521 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:35.138531 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:40:35.138542 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:40:35.138553 | orchestrator |
2025-09-11 00:40:35.138564 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:40:35.138576 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:40:35.138588 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:40:35.138598 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:40:35.138607 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:40:35.138617 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:40:35.138626 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:40:35.138636 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:40:35.138646 | orchestrator |
2025-09-11 00:40:35.138663 | orchestrator |
2025-09-11 00:40:35.138672 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:40:35.138682 | orchestrator | Thursday 11 September 2025 00:40:34 +0000 (0:00:00.638) 0:00:07.454 ****
2025-09-11 00:40:35.138692 | orchestrator | ===============================================================================
2025-09-11 00:40:35.138701 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.37s
2025-09-11 00:40:35.138711 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s
2025-09-11 00:40:35.138720 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s
2025-09-11 00:40:35.138730 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s
2025-09-11 00:40:37.333448 | orchestrator | 2025-09-11 00:40:37 | INFO  | Task ee94c053-27ce-4b2c-b815-700f0b980449 (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-11 00:40:37.333569 | orchestrator | 2025-09-11 00:40:37 | INFO  | It takes a moment until task ee94c053-27ce-4b2c-b815-700f0b980449 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-11 00:40:48.213676 | orchestrator |
2025-09-11 00:40:48.213768 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-11 00:40:48.213786 | orchestrator |
2025-09-11 00:40:48.213800 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-11 00:40:48.213812 | orchestrator | Thursday 11 September 2025 00:40:41 +0000 (0:00:00.317) 0:00:00.317 ****
2025-09-11 00:40:48.213823 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-11 00:40:48.213835 | orchestrator |
2025-09-11 00:40:48.213846 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-11 00:40:48.213856 | orchestrator | Thursday 11 September 2025 00:40:41 +0000 (0:00:00.240) 0:00:00.558 ****
2025-09-11 00:40:48.213867 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:40:48.213878 | orchestrator |
2025-09-11 00:40:48.213889 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.213900 | orchestrator | Thursday 11 September 2025 00:40:41 +0000 (0:00:00.207) 0:00:00.765 ****
2025-09-11 00:40:48.213911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-11 00:40:48.213922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-11 00:40:48.213937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-11 00:40:48.213967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-11 00:40:48.213988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-11 00:40:48.214007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-11 00:40:48.214103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-11 00:40:48.214150 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-11 00:40:48.214171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-11 00:40:48.214190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-11 00:40:48.214208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-11 00:40:48.214227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-11 00:40:48.214247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-11 00:40:48.214265 | orchestrator |
2025-09-11 00:40:48.214283 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214294 | orchestrator | Thursday 11 September 2025 00:40:42 +0000 (0:00:00.344) 0:00:01.110 ****
2025-09-11 00:40:48.214305 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:48.214335 | orchestrator |
2025-09-11 00:40:48.214346 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214357 | orchestrator | Thursday 11 September 2025 00:40:42 +0000 (0:00:00.425) 0:00:01.535 ****
2025-09-11 00:40:48.214367 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:48.214378 | orchestrator |
2025-09-11 00:40:48.214388 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214399 | orchestrator | Thursday 11 September 2025 00:40:42 +0000 (0:00:00.222) 0:00:01.758 ****
2025-09-11 00:40:48.214409 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:48.214420 | orchestrator |
2025-09-11 00:40:48.214430 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214441 | orchestrator | Thursday 11 September 2025 00:40:43 +0000 (0:00:00.176) 0:00:01.935 ****
2025-09-11 00:40:48.214452 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:48.214466 | orchestrator |
2025-09-11 00:40:48.214477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214488 | orchestrator | Thursday 11 September 2025 00:40:43 +0000 (0:00:00.180) 0:00:02.116 ****
2025-09-11 00:40:48.214498 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:48.214509 | orchestrator |
2025-09-11 00:40:48.214519 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214530 | orchestrator | Thursday 11 September 2025 00:40:43 +0000 (0:00:00.182) 0:00:02.298 ****
2025-09-11 00:40:48.214541 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:48.214551 | orchestrator |
2025-09-11 00:40:48.214562 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214572 | orchestrator | Thursday 11 September 2025 00:40:43 +0000 (0:00:00.191) 0:00:02.490 ****
2025-09-11 00:40:48.214583 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:48.214593 | orchestrator |
2025-09-11 00:40:48.214604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214615 | orchestrator | Thursday 11 September 2025 00:40:43 +0000 (0:00:00.171) 0:00:02.662 ****
2025-09-11 00:40:48.214625 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:40:48.214635 | orchestrator |
2025-09-11 00:40:48.214646 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214657 | orchestrator | Thursday 11 September 2025 00:40:44 +0000 (0:00:00.192) 0:00:02.854 ****
2025-09-11 00:40:48.214667 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c)
2025-09-11 00:40:48.214679 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c)
2025-09-11 00:40:48.214689 | orchestrator |
2025-09-11 00:40:48.214700 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214711 | orchestrator | Thursday 11 September 2025 00:40:44 +0000 (0:00:00.387) 0:00:03.242 ****
2025-09-11 00:40:48.214738 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1)
2025-09-11 00:40:48.214749 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1)
2025-09-11 00:40:48.214760 | orchestrator |
2025-09-11 00:40:48.214770 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:40:48.214781 | orchestrator | Thursday 11 September 2025 00:40:44 +0000 (0:00:00.394) 0:00:03.636 ****
2025-09-11 00:40:48.214798 |
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d) 2025-09-11 00:40:48.214809 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d) 2025-09-11 00:40:48.214820 | orchestrator | 2025-09-11 00:40:48.214830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:40:48.214841 | orchestrator | Thursday 11 September 2025 00:40:45 +0000 (0:00:00.558) 0:00:04.195 **** 2025-09-11 00:40:48.214852 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1) 2025-09-11 00:40:48.214870 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1) 2025-09-11 00:40:48.214881 | orchestrator | 2025-09-11 00:40:48.214891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:40:48.214902 | orchestrator | Thursday 11 September 2025 00:40:45 +0000 (0:00:00.549) 0:00:04.745 **** 2025-09-11 00:40:48.214913 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-11 00:40:48.214923 | orchestrator | 2025-09-11 00:40:48.214934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:48.214945 | orchestrator | Thursday 11 September 2025 00:40:46 +0000 (0:00:00.540) 0:00:05.285 **** 2025-09-11 00:40:48.214963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-11 00:40:48.214982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-11 00:40:48.215001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-11 00:40:48.215018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-09-11 00:40:48.215036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-11 00:40:48.215054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-11 00:40:48.215072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-11 00:40:48.215091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-11 00:40:48.215133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-11 00:40:48.215155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-11 00:40:48.215167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-11 00:40:48.215178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-11 00:40:48.215189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-11 00:40:48.215199 | orchestrator | 2025-09-11 00:40:48.215209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:48.215220 | orchestrator | Thursday 11 September 2025 00:40:46 +0000 (0:00:00.363) 0:00:05.649 **** 2025-09-11 00:40:48.215230 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:48.215241 | orchestrator | 2025-09-11 00:40:48.215252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:48.215262 | orchestrator | Thursday 11 September 2025 00:40:46 +0000 (0:00:00.182) 0:00:05.831 **** 2025-09-11 00:40:48.215272 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:48.215283 | orchestrator | 2025-09-11 00:40:48.215293 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-11 00:40:48.215304 | orchestrator | Thursday 11 September 2025 00:40:47 +0000 (0:00:00.184) 0:00:06.015 **** 2025-09-11 00:40:48.215314 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:48.215325 | orchestrator | 2025-09-11 00:40:48.215335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:48.215346 | orchestrator | Thursday 11 September 2025 00:40:47 +0000 (0:00:00.163) 0:00:06.179 **** 2025-09-11 00:40:48.215356 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:48.215366 | orchestrator | 2025-09-11 00:40:48.215377 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:48.215388 | orchestrator | Thursday 11 September 2025 00:40:47 +0000 (0:00:00.170) 0:00:06.349 **** 2025-09-11 00:40:48.215398 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:48.215408 | orchestrator | 2025-09-11 00:40:48.215419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:48.215439 | orchestrator | Thursday 11 September 2025 00:40:47 +0000 (0:00:00.166) 0:00:06.516 **** 2025-09-11 00:40:48.215449 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:48.215460 | orchestrator | 2025-09-11 00:40:48.215470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:48.215481 | orchestrator | Thursday 11 September 2025 00:40:47 +0000 (0:00:00.197) 0:00:06.713 **** 2025-09-11 00:40:48.215491 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:48.215502 | orchestrator | 2025-09-11 00:40:48.215512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:48.215523 | orchestrator | Thursday 11 September 2025 00:40:48 +0000 (0:00:00.170) 0:00:06.883 **** 2025-09-11 00:40:48.215542 | 
orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.183962 | orchestrator | 2025-09-11 00:40:55.184058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:55.184070 | orchestrator | Thursday 11 September 2025 00:40:48 +0000 (0:00:00.171) 0:00:07.055 **** 2025-09-11 00:40:55.184078 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-11 00:40:55.184087 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-11 00:40:55.184095 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-11 00:40:55.184102 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-11 00:40:55.184154 | orchestrator | 2025-09-11 00:40:55.184162 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:55.184170 | orchestrator | Thursday 11 September 2025 00:40:49 +0000 (0:00:00.806) 0:00:07.862 **** 2025-09-11 00:40:55.184194 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184202 | orchestrator | 2025-09-11 00:40:55.184210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:55.184218 | orchestrator | Thursday 11 September 2025 00:40:49 +0000 (0:00:00.183) 0:00:08.045 **** 2025-09-11 00:40:55.184225 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184233 | orchestrator | 2025-09-11 00:40:55.184240 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:55.184247 | orchestrator | Thursday 11 September 2025 00:40:49 +0000 (0:00:00.178) 0:00:08.224 **** 2025-09-11 00:40:55.184255 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184262 | orchestrator | 2025-09-11 00:40:55.184269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:40:55.184277 | orchestrator | Thursday 11 September 2025 00:40:49 +0000 (0:00:00.206) 
0:00:08.431 **** 2025-09-11 00:40:55.184284 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184291 | orchestrator | 2025-09-11 00:40:55.184299 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-11 00:40:55.184306 | orchestrator | Thursday 11 September 2025 00:40:49 +0000 (0:00:00.199) 0:00:08.630 **** 2025-09-11 00:40:55.184313 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-11 00:40:55.184321 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-11 00:40:55.184328 | orchestrator | 2025-09-11 00:40:55.184335 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-11 00:40:55.184342 | orchestrator | Thursday 11 September 2025 00:40:49 +0000 (0:00:00.175) 0:00:08.806 **** 2025-09-11 00:40:55.184350 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184357 | orchestrator | 2025-09-11 00:40:55.184364 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-11 00:40:55.184371 | orchestrator | Thursday 11 September 2025 00:40:50 +0000 (0:00:00.128) 0:00:08.934 **** 2025-09-11 00:40:55.184379 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184386 | orchestrator | 2025-09-11 00:40:55.184393 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-11 00:40:55.184400 | orchestrator | Thursday 11 September 2025 00:40:50 +0000 (0:00:00.128) 0:00:09.062 **** 2025-09-11 00:40:55.184407 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184431 | orchestrator | 2025-09-11 00:40:55.184439 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-11 00:40:55.184446 | orchestrator | Thursday 11 September 2025 00:40:50 +0000 (0:00:00.128) 0:00:09.191 **** 2025-09-11 00:40:55.184453 | orchestrator | ok: 
[testbed-node-3] 2025-09-11 00:40:55.184460 | orchestrator | 2025-09-11 00:40:55.184468 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-11 00:40:55.184475 | orchestrator | Thursday 11 September 2025 00:40:50 +0000 (0:00:00.132) 0:00:09.323 **** 2025-09-11 00:40:55.184482 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'}}) 2025-09-11 00:40:55.184490 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0befa402-ebd4-5a4e-889f-8c71805f12b9'}}) 2025-09-11 00:40:55.184497 | orchestrator | 2025-09-11 00:40:55.184504 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-11 00:40:55.184512 | orchestrator | Thursday 11 September 2025 00:40:50 +0000 (0:00:00.162) 0:00:09.486 **** 2025-09-11 00:40:55.184521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'}})  2025-09-11 00:40:55.184537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0befa402-ebd4-5a4e-889f-8c71805f12b9'}})  2025-09-11 00:40:55.184545 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184553 | orchestrator | 2025-09-11 00:40:55.184562 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-11 00:40:55.184571 | orchestrator | Thursday 11 September 2025 00:40:50 +0000 (0:00:00.143) 0:00:09.629 **** 2025-09-11 00:40:55.184579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'}})  2025-09-11 00:40:55.184587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0befa402-ebd4-5a4e-889f-8c71805f12b9'}})  2025-09-11 00:40:55.184596 | orchestrator | skipping: [testbed-node-3] 2025-09-11 
00:40:55.184604 | orchestrator | 2025-09-11 00:40:55.184612 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-11 00:40:55.184620 | orchestrator | Thursday 11 September 2025 00:40:51 +0000 (0:00:00.337) 0:00:09.967 **** 2025-09-11 00:40:55.184628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'}})  2025-09-11 00:40:55.184637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0befa402-ebd4-5a4e-889f-8c71805f12b9'}})  2025-09-11 00:40:55.184645 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184654 | orchestrator | 2025-09-11 00:40:55.184676 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-11 00:40:55.184685 | orchestrator | Thursday 11 September 2025 00:40:51 +0000 (0:00:00.144) 0:00:10.113 **** 2025-09-11 00:40:55.184693 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:40:55.184701 | orchestrator | 2025-09-11 00:40:55.184709 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-11 00:40:55.184717 | orchestrator | Thursday 11 September 2025 00:40:51 +0000 (0:00:00.134) 0:00:10.247 **** 2025-09-11 00:40:55.184726 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:40:55.184734 | orchestrator | 2025-09-11 00:40:55.184742 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-11 00:40:55.184750 | orchestrator | Thursday 11 September 2025 00:40:51 +0000 (0:00:00.125) 0:00:10.374 **** 2025-09-11 00:40:55.184758 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184766 | orchestrator | 2025-09-11 00:40:55.184774 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-11 00:40:55.184782 | orchestrator | Thursday 11 September 2025 00:40:51 +0000 
(0:00:00.121) 0:00:10.495 **** 2025-09-11 00:40:55.184790 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184799 | orchestrator | 2025-09-11 00:40:55.184813 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-11 00:40:55.184822 | orchestrator | Thursday 11 September 2025 00:40:51 +0000 (0:00:00.133) 0:00:10.629 **** 2025-09-11 00:40:55.184830 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184838 | orchestrator | 2025-09-11 00:40:55.184847 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-11 00:40:55.184855 | orchestrator | Thursday 11 September 2025 00:40:51 +0000 (0:00:00.115) 0:00:10.745 **** 2025-09-11 00:40:55.184863 | orchestrator | ok: [testbed-node-3] => { 2025-09-11 00:40:55.184871 | orchestrator |  "ceph_osd_devices": { 2025-09-11 00:40:55.184879 | orchestrator |  "sdb": { 2025-09-11 00:40:55.184887 | orchestrator |  "osd_lvm_uuid": "7f9f8cff-4bc3-57f6-8883-7f2afe56eba7" 2025-09-11 00:40:55.184894 | orchestrator |  }, 2025-09-11 00:40:55.184901 | orchestrator |  "sdc": { 2025-09-11 00:40:55.184908 | orchestrator |  "osd_lvm_uuid": "0befa402-ebd4-5a4e-889f-8c71805f12b9" 2025-09-11 00:40:55.184915 | orchestrator |  } 2025-09-11 00:40:55.184922 | orchestrator |  } 2025-09-11 00:40:55.184930 | orchestrator | } 2025-09-11 00:40:55.184937 | orchestrator | 2025-09-11 00:40:55.184944 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-11 00:40:55.184951 | orchestrator | Thursday 11 September 2025 00:40:52 +0000 (0:00:00.144) 0:00:10.890 **** 2025-09-11 00:40:55.184958 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184965 | orchestrator | 2025-09-11 00:40:55.184973 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-11 00:40:55.184980 | orchestrator | Thursday 11 September 2025 00:40:52 +0000 
(0:00:00.125) 0:00:11.016 **** 2025-09-11 00:40:55.184990 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.184998 | orchestrator | 2025-09-11 00:40:55.185005 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-11 00:40:55.185012 | orchestrator | Thursday 11 September 2025 00:40:52 +0000 (0:00:00.129) 0:00:11.145 **** 2025-09-11 00:40:55.185019 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:40:55.185026 | orchestrator | 2025-09-11 00:40:55.185033 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-11 00:40:55.185040 | orchestrator | Thursday 11 September 2025 00:40:52 +0000 (0:00:00.133) 0:00:11.278 **** 2025-09-11 00:40:55.185047 | orchestrator | changed: [testbed-node-3] => { 2025-09-11 00:40:55.185054 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-11 00:40:55.185061 | orchestrator |  "ceph_osd_devices": { 2025-09-11 00:40:55.185068 | orchestrator |  "sdb": { 2025-09-11 00:40:55.185075 | orchestrator |  "osd_lvm_uuid": "7f9f8cff-4bc3-57f6-8883-7f2afe56eba7" 2025-09-11 00:40:55.185083 | orchestrator |  }, 2025-09-11 00:40:55.185090 | orchestrator |  "sdc": { 2025-09-11 00:40:55.185097 | orchestrator |  "osd_lvm_uuid": "0befa402-ebd4-5a4e-889f-8c71805f12b9" 2025-09-11 00:40:55.185104 | orchestrator |  } 2025-09-11 00:40:55.185127 | orchestrator |  }, 2025-09-11 00:40:55.185134 | orchestrator |  "lvm_volumes": [ 2025-09-11 00:40:55.185141 | orchestrator |  { 2025-09-11 00:40:55.185148 | orchestrator |  "data": "osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7", 2025-09-11 00:40:55.185155 | orchestrator |  "data_vg": "ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7" 2025-09-11 00:40:55.185162 | orchestrator |  }, 2025-09-11 00:40:55.185170 | orchestrator |  { 2025-09-11 00:40:55.185177 | orchestrator |  "data": "osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9", 2025-09-11 00:40:55.185184 | orchestrator |  "data_vg": 
"ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9" 2025-09-11 00:40:55.185191 | orchestrator |  } 2025-09-11 00:40:55.185198 | orchestrator |  ] 2025-09-11 00:40:55.185205 | orchestrator |  } 2025-09-11 00:40:55.185212 | orchestrator | } 2025-09-11 00:40:55.185220 | orchestrator | 2025-09-11 00:40:55.185227 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-11 00:40:55.185239 | orchestrator | Thursday 11 September 2025 00:40:52 +0000 (0:00:00.209) 0:00:11.488 **** 2025-09-11 00:40:55.185246 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-11 00:40:55.185253 | orchestrator | 2025-09-11 00:40:55.185260 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-11 00:40:55.185268 | orchestrator | 2025-09-11 00:40:55.185275 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-11 00:40:55.185282 | orchestrator | Thursday 11 September 2025 00:40:54 +0000 (0:00:02.061) 0:00:13.550 **** 2025-09-11 00:40:55.185289 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-11 00:40:55.185296 | orchestrator | 2025-09-11 00:40:55.185303 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-11 00:40:55.185310 | orchestrator | Thursday 11 September 2025 00:40:54 +0000 (0:00:00.249) 0:00:13.799 **** 2025-09-11 00:40:55.185317 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:40:55.185325 | orchestrator | 2025-09-11 00:40:55.185332 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:40:55.185343 | orchestrator | Thursday 11 September 2025 00:40:55 +0000 (0:00:00.229) 0:00:14.029 **** 2025-09-11 00:41:02.502839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-11 00:41:02.502942 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-11 00:41:02.502958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-11 00:41:02.502969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-11 00:41:02.502981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-11 00:41:02.502992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-11 00:41:02.503003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-11 00:41:02.503013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-11 00:41:02.503024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-11 00:41:02.503035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-11 00:41:02.503065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-11 00:41:02.503077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-11 00:41:02.503088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-11 00:41:02.503146 | orchestrator | 2025-09-11 00:41:02.503160 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503173 | orchestrator | Thursday 11 September 2025 00:40:55 +0000 (0:00:00.377) 0:00:14.406 **** 2025-09-11 00:41:02.503184 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.503196 | orchestrator | 2025-09-11 00:41:02.503207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 
00:41:02.503218 | orchestrator | Thursday 11 September 2025 00:40:55 +0000 (0:00:00.209) 0:00:14.615 **** 2025-09-11 00:41:02.503229 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.503240 | orchestrator | 2025-09-11 00:41:02.503251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503262 | orchestrator | Thursday 11 September 2025 00:40:55 +0000 (0:00:00.216) 0:00:14.832 **** 2025-09-11 00:41:02.503273 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.503284 | orchestrator | 2025-09-11 00:41:02.503295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503306 | orchestrator | Thursday 11 September 2025 00:40:56 +0000 (0:00:00.192) 0:00:15.024 **** 2025-09-11 00:41:02.503317 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.503353 | orchestrator | 2025-09-11 00:41:02.503365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503379 | orchestrator | Thursday 11 September 2025 00:40:56 +0000 (0:00:00.201) 0:00:15.226 **** 2025-09-11 00:41:02.503392 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.503404 | orchestrator | 2025-09-11 00:41:02.503416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503429 | orchestrator | Thursday 11 September 2025 00:40:56 +0000 (0:00:00.491) 0:00:15.717 **** 2025-09-11 00:41:02.503442 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.503455 | orchestrator | 2025-09-11 00:41:02.503467 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503478 | orchestrator | Thursday 11 September 2025 00:40:57 +0000 (0:00:00.187) 0:00:15.905 **** 2025-09-11 00:41:02.503488 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.503499 | 
orchestrator | 2025-09-11 00:41:02.503510 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503521 | orchestrator | Thursday 11 September 2025 00:40:57 +0000 (0:00:00.187) 0:00:16.092 **** 2025-09-11 00:41:02.503532 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.503543 | orchestrator | 2025-09-11 00:41:02.503554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503565 | orchestrator | Thursday 11 September 2025 00:40:57 +0000 (0:00:00.193) 0:00:16.286 **** 2025-09-11 00:41:02.503576 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e) 2025-09-11 00:41:02.503588 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e) 2025-09-11 00:41:02.503599 | orchestrator | 2025-09-11 00:41:02.503610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503622 | orchestrator | Thursday 11 September 2025 00:40:57 +0000 (0:00:00.392) 0:00:16.679 **** 2025-09-11 00:41:02.503632 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7) 2025-09-11 00:41:02.503644 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7) 2025-09-11 00:41:02.503655 | orchestrator | 2025-09-11 00:41:02.503666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503677 | orchestrator | Thursday 11 September 2025 00:40:58 +0000 (0:00:00.401) 0:00:17.080 **** 2025-09-11 00:41:02.503687 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256) 2025-09-11 00:41:02.503699 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256) 2025-09-11 00:41:02.503710 | orchestrator | 2025-09-11 00:41:02.503721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503732 | orchestrator | Thursday 11 September 2025 00:40:58 +0000 (0:00:00.446) 0:00:17.527 **** 2025-09-11 00:41:02.503759 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233) 2025-09-11 00:41:02.503772 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233) 2025-09-11 00:41:02.503783 | orchestrator | 2025-09-11 00:41:02.503794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:02.503805 | orchestrator | Thursday 11 September 2025 00:40:59 +0000 (0:00:00.420) 0:00:17.948 **** 2025-09-11 00:41:02.503816 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-11 00:41:02.503826 | orchestrator | 2025-09-11 00:41:02.503837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:02.503854 | orchestrator | Thursday 11 September 2025 00:40:59 +0000 (0:00:00.321) 0:00:18.269 **** 2025-09-11 00:41:02.503866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-11 00:41:02.503894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-11 00:41:02.503905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-11 00:41:02.503917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-11 00:41:02.503927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-11 00:41:02.503938 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-11 00:41:02.503949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-11 00:41:02.503959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-11 00:41:02.503970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-11 00:41:02.503981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-11 00:41:02.503992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-11 00:41:02.504036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-11 00:41:02.504048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-11 00:41:02.504059 | orchestrator | 2025-09-11 00:41:02.504069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:02.504080 | orchestrator | Thursday 11 September 2025 00:40:59 +0000 (0:00:00.363) 0:00:18.633 **** 2025-09-11 00:41:02.504091 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.504120 | orchestrator | 2025-09-11 00:41:02.504131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:02.504143 | orchestrator | Thursday 11 September 2025 00:40:59 +0000 (0:00:00.192) 0:00:18.825 **** 2025-09-11 00:41:02.504154 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.504164 | orchestrator | 2025-09-11 00:41:02.504175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:02.504186 | orchestrator | Thursday 11 September 2025 00:41:00 +0000 (0:00:00.546) 0:00:19.372 **** 
2025-09-11 00:41:02.504197 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.504207 | orchestrator | 2025-09-11 00:41:02.504218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:02.504229 | orchestrator | Thursday 11 September 2025 00:41:00 +0000 (0:00:00.199) 0:00:19.571 **** 2025-09-11 00:41:02.504240 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.504251 | orchestrator | 2025-09-11 00:41:02.504262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:02.504273 | orchestrator | Thursday 11 September 2025 00:41:00 +0000 (0:00:00.195) 0:00:19.767 **** 2025-09-11 00:41:02.504284 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.504295 | orchestrator | 2025-09-11 00:41:02.504305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:02.504316 | orchestrator | Thursday 11 September 2025 00:41:01 +0000 (0:00:00.185) 0:00:19.953 **** 2025-09-11 00:41:02.504327 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.504338 | orchestrator | 2025-09-11 00:41:02.504349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:02.504359 | orchestrator | Thursday 11 September 2025 00:41:01 +0000 (0:00:00.193) 0:00:20.146 **** 2025-09-11 00:41:02.504370 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.504381 | orchestrator | 2025-09-11 00:41:02.504392 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:02.504403 | orchestrator | Thursday 11 September 2025 00:41:01 +0000 (0:00:00.183) 0:00:20.329 **** 2025-09-11 00:41:02.504413 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.504424 | orchestrator | 2025-09-11 00:41:02.504435 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-11 00:41:02.504454 | orchestrator | Thursday 11 September 2025 00:41:01 +0000 (0:00:00.183) 0:00:20.513 **** 2025-09-11 00:41:02.504464 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-11 00:41:02.504476 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-11 00:41:02.504487 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-11 00:41:02.504498 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-11 00:41:02.504509 | orchestrator | 2025-09-11 00:41:02.504520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:02.504531 | orchestrator | Thursday 11 September 2025 00:41:02 +0000 (0:00:00.639) 0:00:21.152 **** 2025-09-11 00:41:02.504542 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:02.504552 | orchestrator | 2025-09-11 00:41:02.504571 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:08.725467 | orchestrator | Thursday 11 September 2025 00:41:02 +0000 (0:00:00.193) 0:00:21.346 **** 2025-09-11 00:41:08.725550 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.725564 | orchestrator | 2025-09-11 00:41:08.725575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:08.725585 | orchestrator | Thursday 11 September 2025 00:41:02 +0000 (0:00:00.209) 0:00:21.555 **** 2025-09-11 00:41:08.725595 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.725605 | orchestrator | 2025-09-11 00:41:08.725614 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:08.725624 | orchestrator | Thursday 11 September 2025 00:41:02 +0000 (0:00:00.235) 0:00:21.791 **** 2025-09-11 00:41:08.725633 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.725643 | orchestrator | 2025-09-11 00:41:08.725668 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-09-11 00:41:08.725679 | orchestrator | Thursday 11 September 2025 00:41:03 +0000 (0:00:00.194) 0:00:21.985 **** 2025-09-11 00:41:08.725688 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-11 00:41:08.725698 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-11 00:41:08.725708 | orchestrator | 2025-09-11 00:41:08.725717 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-11 00:41:08.725727 | orchestrator | Thursday 11 September 2025 00:41:03 +0000 (0:00:00.368) 0:00:22.353 **** 2025-09-11 00:41:08.725737 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.725746 | orchestrator | 2025-09-11 00:41:08.725756 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-11 00:41:08.725766 | orchestrator | Thursday 11 September 2025 00:41:03 +0000 (0:00:00.145) 0:00:22.498 **** 2025-09-11 00:41:08.725776 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.725785 | orchestrator | 2025-09-11 00:41:08.725795 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-11 00:41:08.725804 | orchestrator | Thursday 11 September 2025 00:41:03 +0000 (0:00:00.144) 0:00:22.642 **** 2025-09-11 00:41:08.725814 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.725823 | orchestrator | 2025-09-11 00:41:08.725833 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-11 00:41:08.725842 | orchestrator | Thursday 11 September 2025 00:41:03 +0000 (0:00:00.133) 0:00:22.776 **** 2025-09-11 00:41:08.725852 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:41:08.725862 | orchestrator | 2025-09-11 00:41:08.725872 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-11 
00:41:08.725881 | orchestrator | Thursday 11 September 2025 00:41:04 +0000 (0:00:00.139) 0:00:22.915 **** 2025-09-11 00:41:08.725891 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '344fe78f-9b90-543d-a55e-ac4ca1a09e29'}}) 2025-09-11 00:41:08.725901 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'}}) 2025-09-11 00:41:08.725910 | orchestrator | 2025-09-11 00:41:08.725920 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-11 00:41:08.725950 | orchestrator | Thursday 11 September 2025 00:41:04 +0000 (0:00:00.160) 0:00:23.076 **** 2025-09-11 00:41:08.725961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '344fe78f-9b90-543d-a55e-ac4ca1a09e29'}})  2025-09-11 00:41:08.725971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'}})  2025-09-11 00:41:08.725981 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.725990 | orchestrator | 2025-09-11 00:41:08.726000 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-11 00:41:08.726009 | orchestrator | Thursday 11 September 2025 00:41:04 +0000 (0:00:00.135) 0:00:23.211 **** 2025-09-11 00:41:08.726069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '344fe78f-9b90-543d-a55e-ac4ca1a09e29'}})  2025-09-11 00:41:08.726080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'}})  2025-09-11 00:41:08.726091 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.726127 | orchestrator | 2025-09-11 00:41:08.726138 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-11 00:41:08.726176 | 
orchestrator | Thursday 11 September 2025 00:41:04 +0000 (0:00:00.152) 0:00:23.363 **** 2025-09-11 00:41:08.726187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '344fe78f-9b90-543d-a55e-ac4ca1a09e29'}})  2025-09-11 00:41:08.726198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'}})  2025-09-11 00:41:08.726210 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.726220 | orchestrator | 2025-09-11 00:41:08.726231 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-11 00:41:08.726243 | orchestrator | Thursday 11 September 2025 00:41:04 +0000 (0:00:00.144) 0:00:23.508 **** 2025-09-11 00:41:08.726254 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:41:08.726265 | orchestrator | 2025-09-11 00:41:08.726276 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-11 00:41:08.726287 | orchestrator | Thursday 11 September 2025 00:41:04 +0000 (0:00:00.135) 0:00:23.643 **** 2025-09-11 00:41:08.726298 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:41:08.726309 | orchestrator | 2025-09-11 00:41:08.726319 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-11 00:41:08.726330 | orchestrator | Thursday 11 September 2025 00:41:04 +0000 (0:00:00.137) 0:00:23.781 **** 2025-09-11 00:41:08.726341 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.726352 | orchestrator | 2025-09-11 00:41:08.726377 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-11 00:41:08.726389 | orchestrator | Thursday 11 September 2025 00:41:05 +0000 (0:00:00.126) 0:00:23.908 **** 2025-09-11 00:41:08.726398 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.726408 | orchestrator | 2025-09-11 00:41:08.726417 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-09-11 00:41:08.726427 | orchestrator | Thursday 11 September 2025 00:41:05 +0000 (0:00:00.312) 0:00:24.220 **** 2025-09-11 00:41:08.726437 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.726446 | orchestrator | 2025-09-11 00:41:08.726456 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-11 00:41:08.726465 | orchestrator | Thursday 11 September 2025 00:41:05 +0000 (0:00:00.134) 0:00:24.355 **** 2025-09-11 00:41:08.726475 | orchestrator | ok: [testbed-node-4] => { 2025-09-11 00:41:08.726484 | orchestrator |  "ceph_osd_devices": { 2025-09-11 00:41:08.726494 | orchestrator |  "sdb": { 2025-09-11 00:41:08.726505 | orchestrator |  "osd_lvm_uuid": "344fe78f-9b90-543d-a55e-ac4ca1a09e29" 2025-09-11 00:41:08.726515 | orchestrator |  }, 2025-09-11 00:41:08.726524 | orchestrator |  "sdc": { 2025-09-11 00:41:08.726543 | orchestrator |  "osd_lvm_uuid": "4b4178b7-2f3b-5f27-b2b6-7c3306310ac2" 2025-09-11 00:41:08.726553 | orchestrator |  } 2025-09-11 00:41:08.726562 | orchestrator |  } 2025-09-11 00:41:08.726572 | orchestrator | } 2025-09-11 00:41:08.726582 | orchestrator | 2025-09-11 00:41:08.726592 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-11 00:41:08.726601 | orchestrator | Thursday 11 September 2025 00:41:05 +0000 (0:00:00.153) 0:00:24.509 **** 2025-09-11 00:41:08.726611 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.726620 | orchestrator | 2025-09-11 00:41:08.726636 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-11 00:41:08.726645 | orchestrator | Thursday 11 September 2025 00:41:05 +0000 (0:00:00.138) 0:00:24.647 **** 2025-09-11 00:41:08.726655 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.726664 | orchestrator | 2025-09-11 00:41:08.726674 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-09-11 00:41:08.726683 | orchestrator | Thursday 11 September 2025 00:41:05 +0000 (0:00:00.148) 0:00:24.796 **** 2025-09-11 00:41:08.726693 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:41:08.726702 | orchestrator | 2025-09-11 00:41:08.726712 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-11 00:41:08.726722 | orchestrator | Thursday 11 September 2025 00:41:06 +0000 (0:00:00.121) 0:00:24.918 **** 2025-09-11 00:41:08.726731 | orchestrator | changed: [testbed-node-4] => { 2025-09-11 00:41:08.726741 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-11 00:41:08.726751 | orchestrator |  "ceph_osd_devices": { 2025-09-11 00:41:08.726761 | orchestrator |  "sdb": { 2025-09-11 00:41:08.726770 | orchestrator |  "osd_lvm_uuid": "344fe78f-9b90-543d-a55e-ac4ca1a09e29" 2025-09-11 00:41:08.726784 | orchestrator |  }, 2025-09-11 00:41:08.726794 | orchestrator |  "sdc": { 2025-09-11 00:41:08.726804 | orchestrator |  "osd_lvm_uuid": "4b4178b7-2f3b-5f27-b2b6-7c3306310ac2" 2025-09-11 00:41:08.726813 | orchestrator |  } 2025-09-11 00:41:08.726823 | orchestrator |  }, 2025-09-11 00:41:08.726833 | orchestrator |  "lvm_volumes": [ 2025-09-11 00:41:08.726843 | orchestrator |  { 2025-09-11 00:41:08.726852 | orchestrator |  "data": "osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29", 2025-09-11 00:41:08.726862 | orchestrator |  "data_vg": "ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29" 2025-09-11 00:41:08.726872 | orchestrator |  }, 2025-09-11 00:41:08.726881 | orchestrator |  { 2025-09-11 00:41:08.726891 | orchestrator |  "data": "osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2", 2025-09-11 00:41:08.726900 | orchestrator |  "data_vg": "ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2" 2025-09-11 00:41:08.726910 | orchestrator |  } 2025-09-11 00:41:08.726920 | orchestrator |  ] 2025-09-11 00:41:08.726929 | orchestrator |  } 2025-09-11 00:41:08.726939 | 
orchestrator | } 2025-09-11 00:41:08.726949 | orchestrator | 2025-09-11 00:41:08.726958 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-11 00:41:08.726968 | orchestrator | Thursday 11 September 2025 00:41:06 +0000 (0:00:00.199) 0:00:25.118 **** 2025-09-11 00:41:08.726977 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-11 00:41:08.726987 | orchestrator | 2025-09-11 00:41:08.726997 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-11 00:41:08.727006 | orchestrator | 2025-09-11 00:41:08.727016 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-11 00:41:08.727025 | orchestrator | Thursday 11 September 2025 00:41:07 +0000 (0:00:01.029) 0:00:26.147 **** 2025-09-11 00:41:08.727035 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-11 00:41:08.727044 | orchestrator | 2025-09-11 00:41:08.727054 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-11 00:41:08.727063 | orchestrator | Thursday 11 September 2025 00:41:07 +0000 (0:00:00.457) 0:00:26.604 **** 2025-09-11 00:41:08.727080 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:41:08.727089 | orchestrator | 2025-09-11 00:41:08.727116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:08.727126 | orchestrator | Thursday 11 September 2025 00:41:08 +0000 (0:00:00.615) 0:00:27.220 **** 2025-09-11 00:41:08.727136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-11 00:41:08.727146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-11 00:41:08.727155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-11 
00:41:08.727165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-11 00:41:08.727174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-11 00:41:08.727184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-11 00:41:08.727200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-11 00:41:15.272844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-11 00:41:15.272939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-11 00:41:15.272955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-11 00:41:15.272966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-11 00:41:15.272977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-11 00:41:15.272988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-11 00:41:15.272999 | orchestrator | 2025-09-11 00:41:15.273010 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273021 | orchestrator | Thursday 11 September 2025 00:41:08 +0000 (0:00:00.348) 0:00:27.568 **** 2025-09-11 00:41:15.273032 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.273044 | orchestrator | 2025-09-11 00:41:15.273054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273065 | orchestrator | Thursday 11 September 2025 00:41:08 +0000 (0:00:00.140) 0:00:27.709 **** 2025-09-11 00:41:15.273076 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.273086 | orchestrator | 
2025-09-11 00:41:15.273125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273139 | orchestrator | Thursday 11 September 2025 00:41:09 +0000 (0:00:00.149) 0:00:27.858 **** 2025-09-11 00:41:15.273149 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.273160 | orchestrator | 2025-09-11 00:41:15.273171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273182 | orchestrator | Thursday 11 September 2025 00:41:09 +0000 (0:00:00.142) 0:00:28.001 **** 2025-09-11 00:41:15.273192 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.273203 | orchestrator | 2025-09-11 00:41:15.273214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273224 | orchestrator | Thursday 11 September 2025 00:41:09 +0000 (0:00:00.142) 0:00:28.144 **** 2025-09-11 00:41:15.273235 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.273246 | orchestrator | 2025-09-11 00:41:15.273256 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273267 | orchestrator | Thursday 11 September 2025 00:41:09 +0000 (0:00:00.141) 0:00:28.285 **** 2025-09-11 00:41:15.273278 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.273288 | orchestrator | 2025-09-11 00:41:15.273299 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273310 | orchestrator | Thursday 11 September 2025 00:41:09 +0000 (0:00:00.141) 0:00:28.427 **** 2025-09-11 00:41:15.273321 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.273352 | orchestrator | 2025-09-11 00:41:15.273363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273374 | orchestrator | Thursday 11 September 2025 00:41:09 +0000 
(0:00:00.132) 0:00:28.559 **** 2025-09-11 00:41:15.273385 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.273395 | orchestrator | 2025-09-11 00:41:15.273422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273435 | orchestrator | Thursday 11 September 2025 00:41:09 +0000 (0:00:00.142) 0:00:28.701 **** 2025-09-11 00:41:15.273448 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc) 2025-09-11 00:41:15.273462 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc) 2025-09-11 00:41:15.273474 | orchestrator | 2025-09-11 00:41:15.273486 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273498 | orchestrator | Thursday 11 September 2025 00:41:10 +0000 (0:00:00.432) 0:00:29.134 **** 2025-09-11 00:41:15.273511 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3) 2025-09-11 00:41:15.273523 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3) 2025-09-11 00:41:15.273535 | orchestrator | 2025-09-11 00:41:15.273547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273560 | orchestrator | Thursday 11 September 2025 00:41:10 +0000 (0:00:00.610) 0:00:29.744 **** 2025-09-11 00:41:15.273572 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a) 2025-09-11 00:41:15.273585 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a) 2025-09-11 00:41:15.273597 | orchestrator | 2025-09-11 00:41:15.273609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273621 | orchestrator | 
Thursday 11 September 2025 00:41:11 +0000 (0:00:00.322) 0:00:30.067 **** 2025-09-11 00:41:15.273633 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a) 2025-09-11 00:41:15.273646 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a) 2025-09-11 00:41:15.273658 | orchestrator | 2025-09-11 00:41:15.273671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:41:15.273683 | orchestrator | Thursday 11 September 2025 00:41:11 +0000 (0:00:00.407) 0:00:30.475 **** 2025-09-11 00:41:15.273695 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-11 00:41:15.273708 | orchestrator | 2025-09-11 00:41:15.273720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.273732 | orchestrator | Thursday 11 September 2025 00:41:11 +0000 (0:00:00.290) 0:00:30.765 **** 2025-09-11 00:41:15.273760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-11 00:41:15.273771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-11 00:41:15.273782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-11 00:41:15.273793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-11 00:41:15.273803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-11 00:41:15.273814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-11 00:41:15.273824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-11 00:41:15.273835 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-11 00:41:15.273846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-11 00:41:15.273865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-11 00:41:15.273875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-11 00:41:15.273886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-11 00:41:15.273896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-11 00:41:15.273907 | orchestrator | 2025-09-11 00:41:15.273918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.273928 | orchestrator | Thursday 11 September 2025 00:41:12 +0000 (0:00:00.351) 0:00:31.116 **** 2025-09-11 00:41:15.273939 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.273949 | orchestrator | 2025-09-11 00:41:15.273960 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.273970 | orchestrator | Thursday 11 September 2025 00:41:12 +0000 (0:00:00.181) 0:00:31.298 **** 2025-09-11 00:41:15.273981 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.273991 | orchestrator | 2025-09-11 00:41:15.274002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274064 | orchestrator | Thursday 11 September 2025 00:41:12 +0000 (0:00:00.165) 0:00:31.464 **** 2025-09-11 00:41:15.274079 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.274089 | orchestrator | 2025-09-11 00:41:15.274121 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274132 | 
orchestrator | Thursday 11 September 2025 00:41:12 +0000 (0:00:00.195) 0:00:31.659 **** 2025-09-11 00:41:15.274143 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.274154 | orchestrator | 2025-09-11 00:41:15.274164 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274175 | orchestrator | Thursday 11 September 2025 00:41:13 +0000 (0:00:00.196) 0:00:31.856 **** 2025-09-11 00:41:15.274185 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.274196 | orchestrator | 2025-09-11 00:41:15.274206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274217 | orchestrator | Thursday 11 September 2025 00:41:13 +0000 (0:00:00.173) 0:00:32.030 **** 2025-09-11 00:41:15.274227 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.274238 | orchestrator | 2025-09-11 00:41:15.274249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274259 | orchestrator | Thursday 11 September 2025 00:41:13 +0000 (0:00:00.484) 0:00:32.514 **** 2025-09-11 00:41:15.274270 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.274280 | orchestrator | 2025-09-11 00:41:15.274291 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274301 | orchestrator | Thursday 11 September 2025 00:41:13 +0000 (0:00:00.187) 0:00:32.702 **** 2025-09-11 00:41:15.274312 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.274322 | orchestrator | 2025-09-11 00:41:15.274333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274343 | orchestrator | Thursday 11 September 2025 00:41:14 +0000 (0:00:00.185) 0:00:32.887 **** 2025-09-11 00:41:15.274354 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-11 00:41:15.274365 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-09-11 00:41:15.274375 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-11 00:41:15.274386 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-11 00:41:15.274396 | orchestrator | 2025-09-11 00:41:15.274407 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274418 | orchestrator | Thursday 11 September 2025 00:41:14 +0000 (0:00:00.553) 0:00:33.441 **** 2025-09-11 00:41:15.274428 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.274439 | orchestrator | 2025-09-11 00:41:15.274449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274467 | orchestrator | Thursday 11 September 2025 00:41:14 +0000 (0:00:00.165) 0:00:33.606 **** 2025-09-11 00:41:15.274478 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.274488 | orchestrator | 2025-09-11 00:41:15.274499 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274509 | orchestrator | Thursday 11 September 2025 00:41:14 +0000 (0:00:00.158) 0:00:33.765 **** 2025-09-11 00:41:15.274520 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.274531 | orchestrator | 2025-09-11 00:41:15.274541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:41:15.274552 | orchestrator | Thursday 11 September 2025 00:41:15 +0000 (0:00:00.174) 0:00:33.939 **** 2025-09-11 00:41:15.274568 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:15.274579 | orchestrator | 2025-09-11 00:41:15.274590 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-11 00:41:15.274607 | orchestrator | Thursday 11 September 2025 00:41:15 +0000 (0:00:00.177) 0:00:34.117 **** 2025-09-11 00:41:18.903978 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'sdb', 'value': None}) 2025-09-11 00:41:18.904066 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-11 00:41:18.904082 | orchestrator | 2025-09-11 00:41:18.904139 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-11 00:41:18.904152 | orchestrator | Thursday 11 September 2025 00:41:15 +0000 (0:00:00.155) 0:00:34.272 **** 2025-09-11 00:41:18.904164 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:18.904175 | orchestrator | 2025-09-11 00:41:18.904186 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-11 00:41:18.904197 | orchestrator | Thursday 11 September 2025 00:41:15 +0000 (0:00:00.138) 0:00:34.410 **** 2025-09-11 00:41:18.904208 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:18.904218 | orchestrator | 2025-09-11 00:41:18.904229 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-11 00:41:18.904240 | orchestrator | Thursday 11 September 2025 00:41:15 +0000 (0:00:00.124) 0:00:34.535 **** 2025-09-11 00:41:18.904250 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:18.904261 | orchestrator | 2025-09-11 00:41:18.904272 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-11 00:41:18.904282 | orchestrator | Thursday 11 September 2025 00:41:15 +0000 (0:00:00.127) 0:00:34.663 **** 2025-09-11 00:41:18.904293 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:41:18.904304 | orchestrator | 2025-09-11 00:41:18.904315 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-11 00:41:18.904326 | orchestrator | Thursday 11 September 2025 00:41:16 +0000 (0:00:00.248) 0:00:34.911 **** 2025-09-11 00:41:18.904338 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1fcfbff8-db79-5f3f-a505-ec8e716f38d6'}}) 
2025-09-11 00:41:18.904349 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a3e2512-7b8b-5f78-845d-17a09314c972'}}) 2025-09-11 00:41:18.904360 | orchestrator | 2025-09-11 00:41:18.904371 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-11 00:41:18.904381 | orchestrator | Thursday 11 September 2025 00:41:16 +0000 (0:00:00.151) 0:00:35.063 **** 2025-09-11 00:41:18.904393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1fcfbff8-db79-5f3f-a505-ec8e716f38d6'}})  2025-09-11 00:41:18.904405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a3e2512-7b8b-5f78-845d-17a09314c972'}})  2025-09-11 00:41:18.904415 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:18.904426 | orchestrator | 2025-09-11 00:41:18.904452 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-11 00:41:18.904464 | orchestrator | Thursday 11 September 2025 00:41:16 +0000 (0:00:00.140) 0:00:35.204 **** 2025-09-11 00:41:18.904474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1fcfbff8-db79-5f3f-a505-ec8e716f38d6'}})  2025-09-11 00:41:18.904503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a3e2512-7b8b-5f78-845d-17a09314c972'}})  2025-09-11 00:41:18.904515 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:18.904526 | orchestrator | 2025-09-11 00:41:18.904538 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-11 00:41:18.904550 | orchestrator | Thursday 11 September 2025 00:41:16 +0000 (0:00:00.141) 0:00:35.346 **** 2025-09-11 00:41:18.904563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1fcfbff8-db79-5f3f-a505-ec8e716f38d6'}})  2025-09-11 
00:41:18.904575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a3e2512-7b8b-5f78-845d-17a09314c972'}})  2025-09-11 00:41:18.904587 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:18.904599 | orchestrator | 2025-09-11 00:41:18.904611 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-11 00:41:18.904624 | orchestrator | Thursday 11 September 2025 00:41:16 +0000 (0:00:00.160) 0:00:35.506 **** 2025-09-11 00:41:18.904636 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:41:18.904649 | orchestrator | 2025-09-11 00:41:18.904663 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-11 00:41:18.904676 | orchestrator | Thursday 11 September 2025 00:41:16 +0000 (0:00:00.124) 0:00:35.631 **** 2025-09-11 00:41:18.904689 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:41:18.904701 | orchestrator | 2025-09-11 00:41:18.904713 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-11 00:41:18.904726 | orchestrator | Thursday 11 September 2025 00:41:16 +0000 (0:00:00.116) 0:00:35.748 **** 2025-09-11 00:41:18.904739 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:18.904751 | orchestrator | 2025-09-11 00:41:18.904764 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-11 00:41:18.904776 | orchestrator | Thursday 11 September 2025 00:41:17 +0000 (0:00:00.142) 0:00:35.890 **** 2025-09-11 00:41:18.904789 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:41:18.904802 | orchestrator | 2025-09-11 00:41:18.904814 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-11 00:41:18.904827 | orchestrator | Thursday 11 September 2025 00:41:17 +0000 (0:00:00.122) 0:00:36.013 **** 2025-09-11 00:41:18.904839 | orchestrator | skipping: [testbed-node-5] 
2025-09-11 00:41:18.904852 | orchestrator |
2025-09-11 00:41:18.904864 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-11 00:41:18.904877 | orchestrator | Thursday 11 September 2025 00:41:17 +0000 (0:00:00.113) 0:00:36.126 ****
2025-09-11 00:41:18.904889 | orchestrator | ok: [testbed-node-5] => {
2025-09-11 00:41:18.904899 | orchestrator |     "ceph_osd_devices": {
2025-09-11 00:41:18.904911 | orchestrator |         "sdb": {
2025-09-11 00:41:18.904922 | orchestrator |             "osd_lvm_uuid": "1fcfbff8-db79-5f3f-a505-ec8e716f38d6"
2025-09-11 00:41:18.904950 | orchestrator |         },
2025-09-11 00:41:18.904962 | orchestrator |         "sdc": {
2025-09-11 00:41:18.904973 | orchestrator |             "osd_lvm_uuid": "8a3e2512-7b8b-5f78-845d-17a09314c972"
2025-09-11 00:41:18.904984 | orchestrator |         }
2025-09-11 00:41:18.904995 | orchestrator |     }
2025-09-11 00:41:18.905006 | orchestrator | }
2025-09-11 00:41:18.905017 | orchestrator |
2025-09-11 00:41:18.905028 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-11 00:41:18.905039 | orchestrator | Thursday 11 September 2025 00:41:17 +0000 (0:00:00.112) 0:00:36.239 ****
2025-09-11 00:41:18.905049 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:41:18.905060 | orchestrator |
2025-09-11 00:41:18.905070 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-11 00:41:18.905081 | orchestrator | Thursday 11 September 2025 00:41:17 +0000 (0:00:00.120) 0:00:36.359 ****
2025-09-11 00:41:18.905092 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:41:18.905118 | orchestrator |
2025-09-11 00:41:18.905129 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-11 00:41:18.905148 | orchestrator | Thursday 11 September 2025 00:41:17 +0000 (0:00:00.247) 0:00:36.607 ****
2025-09-11 00:41:18.905159 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:41:18.905170 | orchestrator |
2025-09-11 00:41:18.905181 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-11 00:41:18.905191 | orchestrator | Thursday 11 September 2025 00:41:17 +0000 (0:00:00.123) 0:00:36.730 ****
2025-09-11 00:41:18.905202 | orchestrator | changed: [testbed-node-5] => {
2025-09-11 00:41:18.905213 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-11 00:41:18.905224 | orchestrator |         "ceph_osd_devices": {
2025-09-11 00:41:18.905235 | orchestrator |             "sdb": {
2025-09-11 00:41:18.905246 | orchestrator |                 "osd_lvm_uuid": "1fcfbff8-db79-5f3f-a505-ec8e716f38d6"
2025-09-11 00:41:18.905257 | orchestrator |             },
2025-09-11 00:41:18.905267 | orchestrator |             "sdc": {
2025-09-11 00:41:18.905279 | orchestrator |                 "osd_lvm_uuid": "8a3e2512-7b8b-5f78-845d-17a09314c972"
2025-09-11 00:41:18.905289 | orchestrator |             }
2025-09-11 00:41:18.905300 | orchestrator |         },
2025-09-11 00:41:18.905311 | orchestrator |         "lvm_volumes": [
2025-09-11 00:41:18.905322 | orchestrator |             {
2025-09-11 00:41:18.905333 | orchestrator |                 "data": "osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6",
2025-09-11 00:41:18.905343 | orchestrator |                 "data_vg": "ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6"
2025-09-11 00:41:18.905354 | orchestrator |             },
2025-09-11 00:41:18.905365 | orchestrator |             {
2025-09-11 00:41:18.905375 | orchestrator |                 "data": "osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972",
2025-09-11 00:41:18.905386 | orchestrator |                 "data_vg": "ceph-8a3e2512-7b8b-5f78-845d-17a09314c972"
2025-09-11 00:41:18.905397 | orchestrator |             }
2025-09-11 00:41:18.905408 | orchestrator |         ]
2025-09-11 00:41:18.905419 | orchestrator |     }
2025-09-11 00:41:18.905433 | orchestrator | }
2025-09-11 00:41:18.905444 | orchestrator |
2025-09-11 00:41:18.905455 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-11 00:41:18.905466 | orchestrator | Thursday 11 September 2025
00:41:18 +0000 (0:00:00.185) 0:00:36.916 ****
2025-09-11 00:41:18.905476 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-11 00:41:18.905487 | orchestrator |
2025-09-11 00:41:18.905498 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:41:18.905516 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-11 00:41:18.905527 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-11 00:41:18.905538 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-11 00:41:18.905549 | orchestrator |
2025-09-11 00:41:18.905560 | orchestrator |
2025-09-11 00:41:18.905570 | orchestrator |
2025-09-11 00:41:18.905581 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:41:18.905592 | orchestrator | Thursday 11 September 2025 00:41:18 +0000 (0:00:00.823) 0:00:37.739 ****
2025-09-11 00:41:18.905602 | orchestrator | ===============================================================================
2025-09-11 00:41:18.905613 | orchestrator | Write configuration file ------------------------------------------------ 3.92s
2025-09-11 00:41:18.905624 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s
2025-09-11 00:41:18.905634 | orchestrator | Add known links to the list of available block devices ------------------ 1.07s
2025-09-11 00:41:18.905645 | orchestrator | Get initial list of available block devices ----------------------------- 1.05s
2025-09-11 00:41:18.905656 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.95s
2025-09-11 00:41:18.905672 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s
2025-09-11 00:41:18.905683 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.70s
2025-09-11 00:41:18.905694 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-09-11 00:41:18.905704 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.63s
2025-09-11 00:41:18.905715 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-09-11 00:41:18.905726 | orchestrator | Print configuration data ------------------------------------------------ 0.59s
2025-09-11 00:41:18.905737 | orchestrator | Set WAL devices config data --------------------------------------------- 0.57s
2025-09-11 00:41:18.905747 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2025-09-11 00:41:18.905758 | orchestrator | Add known partitions to the list of available block devices ------------- 0.55s
2025-09-11 00:41:18.905776 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s
2025-09-11 00:41:19.122535 | orchestrator | Add known partitions to the list of available block devices ------------- 0.55s
2025-09-11 00:41:19.122616 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s
2025-09-11 00:41:19.122628 | orchestrator | Print DB devices -------------------------------------------------------- 0.53s
2025-09-11 00:41:19.122639 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.52s
2025-09-11 00:41:19.122650 | orchestrator | Add known links to the list of available block devices ------------------ 0.49s
2025-09-11 00:41:41.315548 | orchestrator | 2025-09-11 00:41:41 | INFO  | Task 35682869-791c-46d4-9f2a-6778b8df119b (sync inventory) is running in background. Output coming soon.
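The play above compiles `lvm_volumes` purely from `ceph_osd_devices`: each device's `osd_lvm_uuid` yields one block-only entry whose volume group is named `ceph-<uuid>` and whose logical volume is named `osd-block-<uuid>`, exactly as the "Compile lvm_volumes" and "Print configuration data" tasks report. A minimal Python sketch of that mapping (an illustration, not the OSISM implementation):

```python
# Input as shown by the "Print ceph_osd_devices" task for testbed-node-5.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "1fcfbff8-db79-5f3f-a505-ec8e716f38d6"},
    "sdc": {"osd_lvm_uuid": "8a3e2512-7b8b-5f78-845d-17a09314c972"},
}

# Each device becomes one block-only lvm_volumes entry named after its UUID.
lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]
```

The DB/WAL variants of the structure are skipped in this run because no `ceph_db_devices`/`ceph_wal_devices` are configured, so only the block-only branch fires.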
2025-09-11 00:42:03.791099 | orchestrator | 2025-09-11 00:41:42 | INFO  | Starting group_vars file reorganization
2025-09-11 00:42:03.791195 | orchestrator | 2025-09-11 00:41:42 | INFO  | Moved 0 file(s) to their respective directories
2025-09-11 00:42:03.791210 | orchestrator | 2025-09-11 00:41:42 | INFO  | Group_vars file reorganization completed
2025-09-11 00:42:03.791222 | orchestrator | 2025-09-11 00:41:44 | INFO  | Starting variable preparation from inventory
2025-09-11 00:42:03.791233 | orchestrator | 2025-09-11 00:41:47 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-11 00:42:03.791244 | orchestrator | 2025-09-11 00:41:47 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-11 00:42:03.791255 | orchestrator | 2025-09-11 00:41:47 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-11 00:42:03.791266 | orchestrator | 2025-09-11 00:41:47 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-11 00:42:03.791277 | orchestrator | 2025-09-11 00:41:47 | INFO  | Variable preparation completed
2025-09-11 00:42:03.791288 | orchestrator | 2025-09-11 00:41:48 | INFO  | Starting inventory overwrite handling
2025-09-11 00:42:03.791299 | orchestrator | 2025-09-11 00:41:48 | INFO  | Handling group overwrites in 99-overwrite
2025-09-11 00:42:03.791311 | orchestrator | 2025-09-11 00:41:48 | INFO  | Removing group frr:children from 60-generic
2025-09-11 00:42:03.791321 | orchestrator | 2025-09-11 00:41:48 | INFO  | Removing group storage:children from 50-kolla
2025-09-11 00:42:03.791332 | orchestrator | 2025-09-11 00:41:48 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-11 00:42:03.791343 | orchestrator | 2025-09-11 00:41:48 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-11 00:42:03.791354 | orchestrator | 2025-09-11 00:41:48 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-11 00:42:03.791365 | orchestrator | 2025-09-11 00:41:48 | INFO  | Handling group overwrites in 20-roles
2025-09-11 00:42:03.791376 | orchestrator | 2025-09-11 00:41:48 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-11 00:42:03.791411 | orchestrator | 2025-09-11 00:41:48 | INFO  | Removed 6 group(s) in total
2025-09-11 00:42:03.791423 | orchestrator | 2025-09-11 00:41:48 | INFO  | Inventory overwrite handling completed
2025-09-11 00:42:03.791434 | orchestrator | 2025-09-11 00:41:49 | INFO  | Starting merge of inventory files
2025-09-11 00:42:03.791444 | orchestrator | 2025-09-11 00:41:49 | INFO  | Inventory files merged successfully
2025-09-11 00:42:03.791455 | orchestrator | 2025-09-11 00:41:53 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-11 00:42:03.791466 | orchestrator | 2025-09-11 00:42:02 | INFO  | Successfully wrote ClusterShell configuration
2025-09-11 00:42:03.791477 | orchestrator | [master 5a9d1c8] 2025-09-11-00-42
2025-09-11 00:42:03.791489 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-11 00:42:05.603451 | orchestrator | 2025-09-11 00:42:05 | INFO  | Task e2e46772-b95d-4a80-96a9-44baa1f16138 (ceph-create-lvm-devices) was prepared for execution.
2025-09-11 00:42:05.603749 | orchestrator | 2025-09-11 00:42:05 | INFO  | It takes a moment until task e2e46772-b95d-4a80-96a9-44baa1f16138 (ceph-create-lvm-devices) has been started and output is visible here.
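The inventory sync above logs an "overwrite handling" pass: a group that is redefined in a higher-priority inventory layer (here 99-overwrite and 20-roles) is removed from lower-priority layers so only one definition survives the merge. A hypothetical sketch of that pass, assuming a simple mapping of layer name to the group headers it defines (the real osism file format and internals differ):

```python
def handle_overwrites(inventory, overwrite_layer):
    """Remove groups defined in overwrite_layer from every other layer.

    inventory: dict mapping layer name -> set of group headers it defines.
    Returns the number of group definitions removed.
    """
    removed = 0
    for name, groups in inventory.items():
        if name == overwrite_layer:
            continue
        # Intersect first so we never mutate a set while iterating it.
        for group in inventory[overwrite_layer] & groups:
            groups.discard(group)  # e.g. "Removing group frr:children from 60-generic"
            removed += 1
    return removed

# Illustrative layers loosely modeled on the log; contents are assumptions.
inventory = {
    "50-ceph": {"ceph-mds", "ceph-rgw", "ceph-mon"},
    "60-generic": {"frr:children", "generic"},
    "99-overwrite": {"frr:children", "ceph-mds", "ceph-rgw"},
}
removed = handle_overwrites(inventory, "99-overwrite")
```

After the pass the merged inventory can be built without duplicate group definitions, which is what the subsequent "merge of inventory files" step relies on.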
2025-09-11 00:42:15.013856 | orchestrator |
2025-09-11 00:42:15.013947 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-11 00:42:15.013976 | orchestrator |
2025-09-11 00:42:15.013989 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-11 00:42:15.014000 | orchestrator | Thursday 11 September 2025 00:42:08 +0000 (0:00:00.228) 0:00:00.228 ****
2025-09-11 00:42:15.014011 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-11 00:42:15.014140 | orchestrator |
2025-09-11 00:42:15.014311 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-11 00:42:15.014323 | orchestrator | Thursday 11 September 2025 00:42:08 +0000 (0:00:00.215) 0:00:00.443 ****
2025-09-11 00:42:15.014334 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:42:15.014345 | orchestrator |
2025-09-11 00:42:15.014356 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.014366 | orchestrator | Thursday 11 September 2025 00:42:09 +0000 (0:00:00.202) 0:00:00.646 ****
2025-09-11 00:42:15.014377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-11 00:42:15.014389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-11 00:42:15.014400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-11 00:42:15.014414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-11 00:42:15.014426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-11 00:42:15.014438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-11 00:42:15.014451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-11 00:42:15.014463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-11 00:42:15.014475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-11 00:42:15.014488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-11 00:42:15.014501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-11 00:42:15.014513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-11 00:42:15.014526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-11 00:42:15.014538 | orchestrator |
2025-09-11 00:42:15.014551 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.014588 | orchestrator | Thursday 11 September 2025 00:42:09 +0000 (0:00:00.364) 0:00:01.011 ****
2025-09-11 00:42:15.014601 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.014613 | orchestrator |
2025-09-11 00:42:15.014626 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.014653 | orchestrator | Thursday 11 September 2025 00:42:09 +0000 (0:00:00.342) 0:00:01.354 ****
2025-09-11 00:42:15.014665 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.014678 | orchestrator |
2025-09-11 00:42:15.014690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.014703 | orchestrator | Thursday 11 September 2025 00:42:09 +0000 (0:00:00.153) 0:00:01.507 ****
2025-09-11 00:42:15.014720 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.014733 | orchestrator |
2025-09-11 00:42:15.014746 | orchestrator | TASK [Add known links
to the list of available block devices] ******************
2025-09-11 00:42:15.014759 | orchestrator | Thursday 11 September 2025 00:42:10 +0000 (0:00:00.177) 0:00:01.684 ****
2025-09-11 00:42:15.014770 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.014780 | orchestrator |
2025-09-11 00:42:15.014791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.014802 | orchestrator | Thursday 11 September 2025 00:42:10 +0000 (0:00:00.161) 0:00:01.846 ****
2025-09-11 00:42:15.014812 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.014823 | orchestrator |
2025-09-11 00:42:15.014833 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.014844 | orchestrator | Thursday 11 September 2025 00:42:10 +0000 (0:00:00.168) 0:00:02.014 ****
2025-09-11 00:42:15.014855 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.014865 | orchestrator |
2025-09-11 00:42:15.014876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.014898 | orchestrator | Thursday 11 September 2025 00:42:10 +0000 (0:00:00.187) 0:00:02.202 ****
2025-09-11 00:42:15.014909 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.014919 | orchestrator |
2025-09-11 00:42:15.014930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.014940 | orchestrator | Thursday 11 September 2025 00:42:10 +0000 (0:00:00.152) 0:00:02.354 ****
2025-09-11 00:42:15.014951 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.015245 | orchestrator |
2025-09-11 00:42:15.015261 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.015272 | orchestrator | Thursday 11 September 2025 00:42:10 +0000 (0:00:00.171) 0:00:02.525 ****
2025-09-11 00:42:15.015283 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c)
2025-09-11 00:42:15.015295 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c)
2025-09-11 00:42:15.015305 | orchestrator |
2025-09-11 00:42:15.015316 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.015327 | orchestrator | Thursday 11 September 2025 00:42:11 +0000 (0:00:00.325) 0:00:02.851 ****
2025-09-11 00:42:15.015357 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1)
2025-09-11 00:42:15.015369 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1)
2025-09-11 00:42:15.015379 | orchestrator |
2025-09-11 00:42:15.015390 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.015401 | orchestrator | Thursday 11 September 2025 00:42:11 +0000 (0:00:00.314) 0:00:03.165 ****
2025-09-11 00:42:15.015412 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d)
2025-09-11 00:42:15.015422 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d)
2025-09-11 00:42:15.015433 | orchestrator |
2025-09-11 00:42:15.015444 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.015465 | orchestrator | Thursday 11 September 2025 00:42:12 +0000 (0:00:00.599) 0:00:03.765 ****
2025-09-11 00:42:15.015476 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1)
2025-09-11 00:42:15.015486 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1)
2025-09-11 00:42:15.015497 | orchestrator |
2025-09-11 00:42:15.015508 |
orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-11 00:42:15.015518 | orchestrator | Thursday 11 September 2025 00:42:12 +0000 (0:00:00.637) 0:00:04.402 ****
2025-09-11 00:42:15.015529 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-11 00:42:15.015539 | orchestrator |
2025-09-11 00:42:15.015550 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:15.015561 | orchestrator | Thursday 11 September 2025 00:42:13 +0000 (0:00:00.288) 0:00:04.691 ****
2025-09-11 00:42:15.015571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-11 00:42:15.015582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-11 00:42:15.015592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-11 00:42:15.015603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-11 00:42:15.015613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-11 00:42:15.015624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-11 00:42:15.015634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-11 00:42:15.015645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-11 00:42:15.015655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-11 00:42:15.015666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-11 00:42:15.015676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-11 00:42:15.015687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-11 00:42:15.015697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-11 00:42:15.015708 | orchestrator |
2025-09-11 00:42:15.015718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:15.015729 | orchestrator | Thursday 11 September 2025 00:42:13 +0000 (0:00:00.395) 0:00:05.086 ****
2025-09-11 00:42:15.015739 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.015750 | orchestrator |
2025-09-11 00:42:15.015761 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:15.015771 | orchestrator | Thursday 11 September 2025 00:42:13 +0000 (0:00:00.187) 0:00:05.273 ****
2025-09-11 00:42:15.015782 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.015792 | orchestrator |
2025-09-11 00:42:15.015803 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:15.015814 | orchestrator | Thursday 11 September 2025 00:42:13 +0000 (0:00:00.164) 0:00:05.438 ****
2025-09-11 00:42:15.015834 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.015845 | orchestrator |
2025-09-11 00:42:15.015856 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:15.015867 | orchestrator | Thursday 11 September 2025 00:42:14 +0000 (0:00:00.230) 0:00:05.669 ****
2025-09-11 00:42:15.015877 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.015887 | orchestrator |
2025-09-11 00:42:15.015898 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:15.015915 | orchestrator | Thursday 11 September 2025 00:42:14 +0000 (0:00:00.180) 0:00:05.849 ****
2025-09-11 00:42:15.015956 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.015968 | orchestrator |
2025-09-11 00:42:15.015979 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:15.015990 | orchestrator | Thursday 11 September 2025 00:42:14 +0000 (0:00:00.184) 0:00:06.034 ****
2025-09-11 00:42:15.016000 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.016011 | orchestrator |
2025-09-11 00:42:15.016022 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:15.016202 | orchestrator | Thursday 11 September 2025 00:42:14 +0000 (0:00:00.183) 0:00:06.217 ****
2025-09-11 00:42:15.016215 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:15.016225 | orchestrator |
2025-09-11 00:42:15.016236 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:15.016247 | orchestrator | Thursday 11 September 2025 00:42:14 +0000 (0:00:00.191) 0:00:06.408 ****
2025-09-11 00:42:15.016265 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.254613 | orchestrator |
2025-09-11 00:42:22.254721 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:22.254738 | orchestrator | Thursday 11 September 2025 00:42:15 +0000 (0:00:00.167) 0:00:06.576 ****
2025-09-11 00:42:22.254750 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-11 00:42:22.254762 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-11 00:42:22.254773 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-11 00:42:22.254784 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-11 00:42:22.254795 | orchestrator |
2025-09-11 00:42:22.254806 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:22.254817 |
orchestrator | Thursday 11 September 2025 00:42:15 +0000 (0:00:00.820) 0:00:07.396 ****
2025-09-11 00:42:22.254828 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.254838 | orchestrator |
2025-09-11 00:42:22.254849 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:22.254860 | orchestrator | Thursday 11 September 2025 00:42:16 +0000 (0:00:00.182) 0:00:07.578 ****
2025-09-11 00:42:22.254871 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.254881 | orchestrator |
2025-09-11 00:42:22.254892 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:22.254903 | orchestrator | Thursday 11 September 2025 00:42:16 +0000 (0:00:00.167) 0:00:07.746 ****
2025-09-11 00:42:22.254960 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.254971 | orchestrator |
2025-09-11 00:42:22.255009 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-11 00:42:22.255021 | orchestrator | Thursday 11 September 2025 00:42:16 +0000 (0:00:00.168) 0:00:07.914 ****
2025-09-11 00:42:22.255452 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.255466 | orchestrator |
2025-09-11 00:42:22.255478 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-11 00:42:22.255490 | orchestrator | Thursday 11 September 2025 00:42:16 +0000 (0:00:00.184) 0:00:08.098 ****
2025-09-11 00:42:22.255503 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.255515 | orchestrator |
2025-09-11 00:42:22.255527 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-11 00:42:22.255537 | orchestrator | Thursday 11 September 2025 00:42:16 +0000 (0:00:00.113) 0:00:08.212 ****
2025-09-11 00:42:22.255548 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'}})
2025-09-11 00:42:22.255560 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0befa402-ebd4-5a4e-889f-8c71805f12b9'}})
2025-09-11 00:42:22.255571 | orchestrator |
2025-09-11 00:42:22.255581 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-11 00:42:22.255592 | orchestrator | Thursday 11 September 2025 00:42:16 +0000 (0:00:00.158) 0:00:08.370 ****
2025-09-11 00:42:22.255604 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:42:22.255637 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:42:22.255648 | orchestrator |
2025-09-11 00:42:22.255672 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-11 00:42:22.255690 | orchestrator | Thursday 11 September 2025 00:42:18 +0000 (0:00:02.038) 0:00:10.408 ****
2025-09-11 00:42:22.255702 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:42:22.255714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:42:22.255725 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.255736 | orchestrator |
2025-09-11 00:42:22.255746 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-11 00:42:22.255757 | orchestrator | Thursday 11 September 2025 00:42:18 +0000 (0:00:00.145) 0:00:10.553 ****
2025-09-11 00:42:22.255768 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:42:22.255779 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:42:22.255790 | orchestrator |
2025-09-11 00:42:22.255800 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-11 00:42:22.255811 | orchestrator | Thursday 11 September 2025 00:42:20 +0000 (0:00:01.448) 0:00:12.002 ****
2025-09-11 00:42:22.255821 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:42:22.255833 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:42:22.255844 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.255855 | orchestrator |
2025-09-11 00:42:22.255865 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-11 00:42:22.255876 | orchestrator | Thursday 11 September 2025 00:42:20 +0000 (0:00:00.132) 0:00:12.134 ****
2025-09-11 00:42:22.255887 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.255898 | orchestrator |
2025-09-11 00:42:22.255909 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-11 00:42:22.255937 | orchestrator | Thursday 11 September 2025 00:42:20 +0000 (0:00:00.120) 0:00:12.254 ****
2025-09-11 00:42:22.255949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:42:22.255960 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:42:22.255970 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.255981 | orchestrator |
2025-09-11 00:42:22.255992 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-11 00:42:22.256003 | orchestrator | Thursday 11 September 2025 00:42:20 +0000 (0:00:00.247) 0:00:12.502 ****
2025-09-11 00:42:22.256013 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.256051 | orchestrator |
2025-09-11 00:42:22.256062 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-11 00:42:22.256073 | orchestrator | Thursday 11 September 2025 00:42:21 +0000 (0:00:00.119) 0:00:12.622 ****
2025-09-11 00:42:22.256084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:42:22.256104 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:42:22.256115 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.256125 | orchestrator |
2025-09-11 00:42:22.256136 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-11 00:42:22.256147 | orchestrator | Thursday 11 September 2025 00:42:21 +0000 (0:00:00.121) 0:00:12.757 ****
2025-09-11 00:42:22.256157 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.256168 | orchestrator |
2025-09-11 00:42:22.256178 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-11 00:42:22.256189 | orchestrator | Thursday 11 September 2025 00:42:21 +0000 (0:00:00.121) 0:00:12.878 ****
2025-09-11 00:42:22.256200 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:42:22.256210 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:42:22.256221 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.256232 | orchestrator |
2025-09-11 00:42:22.256253 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-11 00:42:22.256264 | orchestrator | Thursday 11 September 2025 00:42:21 +0000 (0:00:00.126) 0:00:13.004 ****
2025-09-11 00:42:22.256274 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:42:22.256285 | orchestrator |
2025-09-11 00:42:22.256296 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-11 00:42:22.256412 | orchestrator | Thursday 11 September 2025 00:42:21 +0000 (0:00:00.125) 0:00:13.130 ****
2025-09-11 00:42:22.256511 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:42:22.256564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:42:22.256744 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.256756 | orchestrator |
2025-09-11 00:42:22.256766 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-11 00:42:22.256777 | orchestrator | Thursday 11 September 2025 00:42:21 +0000 (0:00:00.141) 0:00:13.271 ****
2025-09-11 00:42:22.256788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:42:22.256799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:42:22.256810 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.256821 | orchestrator |
2025-09-11 00:42:22.256832 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-11 00:42:22.256842 | orchestrator | Thursday 11 September 2025 00:42:21 +0000 (0:00:00.137) 0:00:13.409 ****
2025-09-11 00:42:22.256853 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:42:22.256864 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:42:22.256875 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.256886 | orchestrator |
2025-09-11 00:42:22.256896 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-11 00:42:22.256907 | orchestrator | Thursday 11 September 2025 00:42:21 +0000 (0:00:00.135) 0:00:13.544 ****
2025-09-11 00:42:22.256918 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.256937 | orchestrator |
2025-09-11 00:42:22.256947 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-11 00:42:22.256958 | orchestrator | Thursday 11 September 2025 00:42:22 +0000 (0:00:00.131) 0:00:13.676 ****
2025-09-11 00:42:22.256969 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:42:22.256980 | orchestrator |
2025-09-11 00:42:22.256999 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-11 00:42:28.072678 | orchestrator | Thursday 11 September 2025 00:42:22 +0000
(0:00:00.136) 0:00:13.813 **** 2025-09-11 00:42:28.072786 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.072802 | orchestrator | 2025-09-11 00:42:28.072815 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-11 00:42:28.072827 | orchestrator | Thursday 11 September 2025 00:42:22 +0000 (0:00:00.120) 0:00:13.933 **** 2025-09-11 00:42:28.072838 | orchestrator | ok: [testbed-node-3] => { 2025-09-11 00:42:28.072849 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-11 00:42:28.072860 | orchestrator | } 2025-09-11 00:42:28.072872 | orchestrator | 2025-09-11 00:42:28.072883 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-11 00:42:28.072894 | orchestrator | Thursday 11 September 2025 00:42:22 +0000 (0:00:00.308) 0:00:14.242 **** 2025-09-11 00:42:28.072905 | orchestrator | ok: [testbed-node-3] => { 2025-09-11 00:42:28.072916 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-11 00:42:28.072927 | orchestrator | } 2025-09-11 00:42:28.072938 | orchestrator | 2025-09-11 00:42:28.072949 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-11 00:42:28.072960 | orchestrator | Thursday 11 September 2025 00:42:22 +0000 (0:00:00.131) 0:00:14.373 **** 2025-09-11 00:42:28.072970 | orchestrator | ok: [testbed-node-3] => { 2025-09-11 00:42:28.072981 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-11 00:42:28.072993 | orchestrator | } 2025-09-11 00:42:28.073005 | orchestrator | 2025-09-11 00:42:28.073044 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-11 00:42:28.073056 | orchestrator | Thursday 11 September 2025 00:42:22 +0000 (0:00:00.122) 0:00:14.495 **** 2025-09-11 00:42:28.073068 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:42:28.073079 | orchestrator | 2025-09-11 00:42:28.073089 | orchestrator | TASK [Gather 
WAL VGs with total and available size in bytes] ******************* 2025-09-11 00:42:28.073100 | orchestrator | Thursday 11 September 2025 00:42:23 +0000 (0:00:00.624) 0:00:15.120 **** 2025-09-11 00:42:28.073111 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:42:28.073122 | orchestrator | 2025-09-11 00:42:28.073133 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-11 00:42:28.073144 | orchestrator | Thursday 11 September 2025 00:42:24 +0000 (0:00:00.525) 0:00:15.645 **** 2025-09-11 00:42:28.073155 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:42:28.073166 | orchestrator | 2025-09-11 00:42:28.073177 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-11 00:42:28.073187 | orchestrator | Thursday 11 September 2025 00:42:24 +0000 (0:00:00.511) 0:00:16.157 **** 2025-09-11 00:42:28.073198 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:42:28.073209 | orchestrator | 2025-09-11 00:42:28.073222 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-11 00:42:28.073235 | orchestrator | Thursday 11 September 2025 00:42:24 +0000 (0:00:00.141) 0:00:16.299 **** 2025-09-11 00:42:28.073248 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073261 | orchestrator | 2025-09-11 00:42:28.073274 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-11 00:42:28.073287 | orchestrator | Thursday 11 September 2025 00:42:24 +0000 (0:00:00.095) 0:00:16.395 **** 2025-09-11 00:42:28.073299 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073313 | orchestrator | 2025-09-11 00:42:28.073326 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-11 00:42:28.073338 | orchestrator | Thursday 11 September 2025 00:42:24 +0000 (0:00:00.096) 0:00:16.491 **** 2025-09-11 00:42:28.073351 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-11 00:42:28.073390 | orchestrator |  "vgs_report": { 2025-09-11 00:42:28.073405 | orchestrator |  "vg": [] 2025-09-11 00:42:28.073417 | orchestrator |  } 2025-09-11 00:42:28.073430 | orchestrator | } 2025-09-11 00:42:28.073443 | orchestrator | 2025-09-11 00:42:28.073456 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-11 00:42:28.073469 | orchestrator | Thursday 11 September 2025 00:42:25 +0000 (0:00:00.120) 0:00:16.611 **** 2025-09-11 00:42:28.073482 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073495 | orchestrator | 2025-09-11 00:42:28.073508 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-11 00:42:28.073521 | orchestrator | Thursday 11 September 2025 00:42:25 +0000 (0:00:00.109) 0:00:16.721 **** 2025-09-11 00:42:28.073533 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073546 | orchestrator | 2025-09-11 00:42:28.073559 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-11 00:42:28.073572 | orchestrator | Thursday 11 September 2025 00:42:25 +0000 (0:00:00.138) 0:00:16.859 **** 2025-09-11 00:42:28.073584 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073597 | orchestrator | 2025-09-11 00:42:28.073609 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-11 00:42:28.073620 | orchestrator | Thursday 11 September 2025 00:42:25 +0000 (0:00:00.293) 0:00:17.153 **** 2025-09-11 00:42:28.073630 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073641 | orchestrator | 2025-09-11 00:42:28.073652 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-11 00:42:28.073662 | orchestrator | Thursday 11 September 2025 00:42:25 +0000 (0:00:00.130) 0:00:17.284 **** 2025-09-11 00:42:28.073673 | orchestrator | skipping: 
[testbed-node-3] 2025-09-11 00:42:28.073684 | orchestrator | 2025-09-11 00:42:28.073711 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-11 00:42:28.073722 | orchestrator | Thursday 11 September 2025 00:42:25 +0000 (0:00:00.138) 0:00:17.423 **** 2025-09-11 00:42:28.073733 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073744 | orchestrator | 2025-09-11 00:42:28.073755 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-11 00:42:28.073765 | orchestrator | Thursday 11 September 2025 00:42:25 +0000 (0:00:00.127) 0:00:17.550 **** 2025-09-11 00:42:28.073776 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073787 | orchestrator | 2025-09-11 00:42:28.073797 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-11 00:42:28.073808 | orchestrator | Thursday 11 September 2025 00:42:26 +0000 (0:00:00.133) 0:00:17.684 **** 2025-09-11 00:42:28.073819 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073830 | orchestrator | 2025-09-11 00:42:28.073840 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-11 00:42:28.073869 | orchestrator | Thursday 11 September 2025 00:42:26 +0000 (0:00:00.149) 0:00:17.834 **** 2025-09-11 00:42:28.073880 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073891 | orchestrator | 2025-09-11 00:42:28.073902 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-11 00:42:28.073913 | orchestrator | Thursday 11 September 2025 00:42:26 +0000 (0:00:00.140) 0:00:17.974 **** 2025-09-11 00:42:28.073924 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073934 | orchestrator | 2025-09-11 00:42:28.073945 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-11 00:42:28.073956 | 
orchestrator | Thursday 11 September 2025 00:42:26 +0000 (0:00:00.140) 0:00:18.114 **** 2025-09-11 00:42:28.073966 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.073977 | orchestrator | 2025-09-11 00:42:28.073988 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-11 00:42:28.073998 | orchestrator | Thursday 11 September 2025 00:42:26 +0000 (0:00:00.140) 0:00:18.255 **** 2025-09-11 00:42:28.074009 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.074092 | orchestrator | 2025-09-11 00:42:28.074114 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-11 00:42:28.074125 | orchestrator | Thursday 11 September 2025 00:42:26 +0000 (0:00:00.115) 0:00:18.371 **** 2025-09-11 00:42:28.074136 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.074147 | orchestrator | 2025-09-11 00:42:28.074158 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-11 00:42:28.074168 | orchestrator | Thursday 11 September 2025 00:42:26 +0000 (0:00:00.134) 0:00:18.505 **** 2025-09-11 00:42:28.074179 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.074190 | orchestrator | 2025-09-11 00:42:28.074201 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-11 00:42:28.074211 | orchestrator | Thursday 11 September 2025 00:42:27 +0000 (0:00:00.122) 0:00:18.628 **** 2025-09-11 00:42:28.074223 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 00:42:28.074235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:28.074246 | orchestrator | skipping: [testbed-node-3] 2025-09-11 
00:42:28.074257 | orchestrator | 2025-09-11 00:42:28.074268 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-11 00:42:28.074279 | orchestrator | Thursday 11 September 2025 00:42:27 +0000 (0:00:00.268) 0:00:18.897 **** 2025-09-11 00:42:28.074289 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 00:42:28.074300 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:28.074311 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.074322 | orchestrator | 2025-09-11 00:42:28.074333 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-11 00:42:28.074344 | orchestrator | Thursday 11 September 2025 00:42:27 +0000 (0:00:00.154) 0:00:19.052 **** 2025-09-11 00:42:28.074360 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 00:42:28.074371 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:28.074382 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.074393 | orchestrator | 2025-09-11 00:42:28.074404 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-11 00:42:28.074415 | orchestrator | Thursday 11 September 2025 00:42:27 +0000 (0:00:00.137) 0:00:19.190 **** 2025-09-11 00:42:28.074425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 
00:42:28.074436 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:28.074447 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.074458 | orchestrator | 2025-09-11 00:42:28.074468 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-11 00:42:28.074479 | orchestrator | Thursday 11 September 2025 00:42:27 +0000 (0:00:00.136) 0:00:19.326 **** 2025-09-11 00:42:28.074490 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 00:42:28.074501 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:28.074511 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:28.074528 | orchestrator | 2025-09-11 00:42:28.074539 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-11 00:42:28.074550 | orchestrator | Thursday 11 September 2025 00:42:27 +0000 (0:00:00.160) 0:00:19.487 **** 2025-09-11 00:42:28.074561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 00:42:28.074579 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:33.102526 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:33.102622 | orchestrator | 2025-09-11 00:42:33.102634 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-11 00:42:33.102643 | orchestrator | Thursday 11 September 2025 
00:42:28 +0000 (0:00:00.145) 0:00:19.632 **** 2025-09-11 00:42:33.102729 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 00:42:33.102738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:33.102745 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:33.102752 | orchestrator | 2025-09-11 00:42:33.102760 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-11 00:42:33.102767 | orchestrator | Thursday 11 September 2025 00:42:28 +0000 (0:00:00.158) 0:00:19.791 **** 2025-09-11 00:42:33.102775 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 00:42:33.102782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:33.102789 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:33.102796 | orchestrator | 2025-09-11 00:42:33.102804 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-11 00:42:33.102876 | orchestrator | Thursday 11 September 2025 00:42:28 +0000 (0:00:00.141) 0:00:19.932 **** 2025-09-11 00:42:33.102884 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:42:33.102892 | orchestrator | 2025-09-11 00:42:33.102899 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-11 00:42:33.102906 | orchestrator | Thursday 11 September 2025 00:42:28 +0000 (0:00:00.543) 0:00:20.475 **** 2025-09-11 00:42:33.102913 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:42:33.102920 | 
orchestrator | 2025-09-11 00:42:33.102969 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-11 00:42:33.102976 | orchestrator | Thursday 11 September 2025 00:42:29 +0000 (0:00:00.500) 0:00:20.975 **** 2025-09-11 00:42:33.102984 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:42:33.103010 | orchestrator | 2025-09-11 00:42:33.103046 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-11 00:42:33.103054 | orchestrator | Thursday 11 September 2025 00:42:29 +0000 (0:00:00.114) 0:00:21.090 **** 2025-09-11 00:42:33.103062 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'vg_name': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'}) 2025-09-11 00:42:33.103071 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'vg_name': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'}) 2025-09-11 00:42:33.103079 | orchestrator | 2025-09-11 00:42:33.103183 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-11 00:42:33.103197 | orchestrator | Thursday 11 September 2025 00:42:29 +0000 (0:00:00.170) 0:00:21.260 **** 2025-09-11 00:42:33.103206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 00:42:33.103229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:33.103237 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:33.103245 | orchestrator | 2025-09-11 00:42:33.103254 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-11 00:42:33.103262 | orchestrator | Thursday 11 September 2025 00:42:29 +0000 
(0:00:00.246) 0:00:21.506 **** 2025-09-11 00:42:33.103270 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 00:42:33.103279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:33.103287 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:33.103295 | orchestrator | 2025-09-11 00:42:33.103303 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-11 00:42:33.103312 | orchestrator | Thursday 11 September 2025 00:42:30 +0000 (0:00:00.144) 0:00:21.651 **** 2025-09-11 00:42:33.103320 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})  2025-09-11 00:42:33.103329 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})  2025-09-11 00:42:33.103337 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:42:33.103352 | orchestrator | 2025-09-11 00:42:33.103361 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-11 00:42:33.103369 | orchestrator | Thursday 11 September 2025 00:42:30 +0000 (0:00:00.171) 0:00:21.822 **** 2025-09-11 00:42:33.103377 | orchestrator | ok: [testbed-node-3] => { 2025-09-11 00:42:33.103385 | orchestrator |  "lvm_report": { 2025-09-11 00:42:33.103394 | orchestrator |  "lv": [ 2025-09-11 00:42:33.103402 | orchestrator |  { 2025-09-11 00:42:33.103423 | orchestrator |  "lv_name": "osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9", 2025-09-11 00:42:33.103432 | orchestrator |  "vg_name": "ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9" 2025-09-11 00:42:33.103487 | 
orchestrator |  }, 2025-09-11 00:42:33.103498 | orchestrator |  { 2025-09-11 00:42:33.103511 | orchestrator |  "lv_name": "osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7", 2025-09-11 00:42:33.103519 | orchestrator |  "vg_name": "ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7" 2025-09-11 00:42:33.103544 | orchestrator |  } 2025-09-11 00:42:33.103553 | orchestrator |  ], 2025-09-11 00:42:33.103560 | orchestrator |  "pv": [ 2025-09-11 00:42:33.103567 | orchestrator |  { 2025-09-11 00:42:33.103575 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-11 00:42:33.103582 | orchestrator |  "vg_name": "ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7" 2025-09-11 00:42:33.103589 | orchestrator |  }, 2025-09-11 00:42:33.103597 | orchestrator |  { 2025-09-11 00:42:33.103604 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-11 00:42:33.103661 | orchestrator |  "vg_name": "ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9" 2025-09-11 00:42:33.103670 | orchestrator |  } 2025-09-11 00:42:33.103677 | orchestrator |  ] 2025-09-11 00:42:33.103684 | orchestrator |  } 2025-09-11 00:42:33.103692 | orchestrator | } 2025-09-11 00:42:33.103699 | orchestrator | 2025-09-11 00:42:33.103706 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-11 00:42:33.103714 | orchestrator | 2025-09-11 00:42:33.103721 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-11 00:42:33.103728 | orchestrator | Thursday 11 September 2025 00:42:30 +0000 (0:00:00.270) 0:00:22.093 **** 2025-09-11 00:42:33.103739 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-11 00:42:33.103755 | orchestrator | 2025-09-11 00:42:33.103766 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-11 00:42:33.103776 | orchestrator | Thursday 11 September 2025 00:42:30 +0000 (0:00:00.271) 0:00:22.365 **** 2025-09-11 00:42:33.103783 | orchestrator | ok: [testbed-node-4] 2025-09-11 
00:42:33.103790 | orchestrator | 2025-09-11 00:42:33.103797 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:33.103804 | orchestrator | Thursday 11 September 2025 00:42:31 +0000 (0:00:00.220) 0:00:22.586 **** 2025-09-11 00:42:33.103824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-11 00:42:33.103832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-11 00:42:33.103839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-11 00:42:33.103847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-11 00:42:33.103859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-11 00:42:33.103866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-11 00:42:33.103873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-11 00:42:33.103884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-11 00:42:33.103891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-11 00:42:33.103899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-11 00:42:33.103906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-11 00:42:33.103913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-11 00:42:33.103920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-11 00:42:33.103927 | orchestrator | 2025-09-11 00:42:33.103934 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2025-09-11 00:42:33.103941 | orchestrator | Thursday 11 September 2025 00:42:31 +0000 (0:00:00.388) 0:00:22.975 **** 2025-09-11 00:42:33.103948 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:33.103955 | orchestrator | 2025-09-11 00:42:33.103962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:33.103969 | orchestrator | Thursday 11 September 2025 00:42:31 +0000 (0:00:00.209) 0:00:23.184 **** 2025-09-11 00:42:33.103977 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:33.103984 | orchestrator | 2025-09-11 00:42:33.103991 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:33.103998 | orchestrator | Thursday 11 September 2025 00:42:31 +0000 (0:00:00.184) 0:00:23.369 **** 2025-09-11 00:42:33.104005 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:33.104048 | orchestrator | 2025-09-11 00:42:33.104056 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:33.104063 | orchestrator | Thursday 11 September 2025 00:42:32 +0000 (0:00:00.464) 0:00:23.833 **** 2025-09-11 00:42:33.104071 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:33.104079 | orchestrator | 2025-09-11 00:42:33.104091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:33.104099 | orchestrator | Thursday 11 September 2025 00:42:32 +0000 (0:00:00.208) 0:00:24.041 **** 2025-09-11 00:42:33.104106 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:33.104113 | orchestrator | 2025-09-11 00:42:33.104120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:33.104127 | orchestrator | Thursday 11 September 2025 00:42:32 +0000 (0:00:00.225) 0:00:24.267 **** 2025-09-11 
00:42:33.104134 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:33.104141 | orchestrator | 2025-09-11 00:42:33.104154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:33.104161 | orchestrator | Thursday 11 September 2025 00:42:32 +0000 (0:00:00.208) 0:00:24.476 **** 2025-09-11 00:42:33.104168 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:33.104176 | orchestrator | 2025-09-11 00:42:33.104189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:42.567435 | orchestrator | Thursday 11 September 2025 00:42:33 +0000 (0:00:00.186) 0:00:24.662 **** 2025-09-11 00:42:42.567534 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.567551 | orchestrator | 2025-09-11 00:42:42.567563 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:42.567574 | orchestrator | Thursday 11 September 2025 00:42:33 +0000 (0:00:00.222) 0:00:24.884 **** 2025-09-11 00:42:42.567585 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e) 2025-09-11 00:42:42.567597 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e) 2025-09-11 00:42:42.567608 | orchestrator | 2025-09-11 00:42:42.567619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:42.567629 | orchestrator | Thursday 11 September 2025 00:42:33 +0000 (0:00:00.438) 0:00:25.323 **** 2025-09-11 00:42:42.567640 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7) 2025-09-11 00:42:42.567651 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7) 2025-09-11 00:42:42.567661 | orchestrator | 2025-09-11 00:42:42.567672 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-09-11 00:42:42.567683 | orchestrator | Thursday 11 September 2025 00:42:34 +0000 (0:00:00.469) 0:00:25.793 **** 2025-09-11 00:42:42.567694 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256) 2025-09-11 00:42:42.567705 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256) 2025-09-11 00:42:42.567715 | orchestrator | 2025-09-11 00:42:42.567726 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:42.567737 | orchestrator | Thursday 11 September 2025 00:42:34 +0000 (0:00:00.403) 0:00:26.196 **** 2025-09-11 00:42:42.567748 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233) 2025-09-11 00:42:42.567759 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233) 2025-09-11 00:42:42.567769 | orchestrator | 2025-09-11 00:42:42.567780 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:42.567791 | orchestrator | Thursday 11 September 2025 00:42:35 +0000 (0:00:00.448) 0:00:26.645 **** 2025-09-11 00:42:42.567801 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-11 00:42:42.567812 | orchestrator | 2025-09-11 00:42:42.567823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.567833 | orchestrator | Thursday 11 September 2025 00:42:35 +0000 (0:00:00.291) 0:00:26.937 **** 2025-09-11 00:42:42.567844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-11 00:42:42.567868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-11 00:42:42.567879 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-11 00:42:42.567890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-11 00:42:42.567901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-11 00:42:42.567911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-11 00:42:42.567922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-11 00:42:42.567956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-11 00:42:42.567969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-11 00:42:42.567981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-11 00:42:42.567993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-11 00:42:42.568026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-11 00:42:42.568039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-11 00:42:42.568051 | orchestrator | 2025-09-11 00:42:42.568064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568076 | orchestrator | Thursday 11 September 2025 00:42:35 +0000 (0:00:00.479) 0:00:27.416 **** 2025-09-11 00:42:42.568089 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568102 | orchestrator | 2025-09-11 00:42:42.568114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568126 | orchestrator | Thursday 11 September 2025 00:42:36 +0000 
(0:00:00.189) 0:00:27.606 **** 2025-09-11 00:42:42.568138 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568151 | orchestrator | 2025-09-11 00:42:42.568163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568175 | orchestrator | Thursday 11 September 2025 00:42:36 +0000 (0:00:00.186) 0:00:27.793 **** 2025-09-11 00:42:42.568188 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568199 | orchestrator | 2025-09-11 00:42:42.568212 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568224 | orchestrator | Thursday 11 September 2025 00:42:36 +0000 (0:00:00.180) 0:00:27.973 **** 2025-09-11 00:42:42.568236 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568248 | orchestrator | 2025-09-11 00:42:42.568276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568290 | orchestrator | Thursday 11 September 2025 00:42:36 +0000 (0:00:00.186) 0:00:28.160 **** 2025-09-11 00:42:42.568302 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568314 | orchestrator | 2025-09-11 00:42:42.568325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568335 | orchestrator | Thursday 11 September 2025 00:42:36 +0000 (0:00:00.189) 0:00:28.350 **** 2025-09-11 00:42:42.568346 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568356 | orchestrator | 2025-09-11 00:42:42.568367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568378 | orchestrator | Thursday 11 September 2025 00:42:36 +0000 (0:00:00.184) 0:00:28.534 **** 2025-09-11 00:42:42.568388 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568399 | orchestrator | 2025-09-11 00:42:42.568409 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568420 | orchestrator | Thursday 11 September 2025 00:42:37 +0000 (0:00:00.177) 0:00:28.711 **** 2025-09-11 00:42:42.568431 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568441 | orchestrator | 2025-09-11 00:42:42.568452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568462 | orchestrator | Thursday 11 September 2025 00:42:37 +0000 (0:00:00.183) 0:00:28.895 **** 2025-09-11 00:42:42.568473 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-11 00:42:42.568484 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-11 00:42:42.568495 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-11 00:42:42.568505 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-11 00:42:42.568516 | orchestrator | 2025-09-11 00:42:42.568527 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568538 | orchestrator | Thursday 11 September 2025 00:42:38 +0000 (0:00:00.692) 0:00:29.587 **** 2025-09-11 00:42:42.568556 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568567 | orchestrator | 2025-09-11 00:42:42.568578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568588 | orchestrator | Thursday 11 September 2025 00:42:38 +0000 (0:00:00.180) 0:00:29.768 **** 2025-09-11 00:42:42.568599 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568610 | orchestrator | 2025-09-11 00:42:42.568620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568631 | orchestrator | Thursday 11 September 2025 00:42:38 +0000 (0:00:00.173) 0:00:29.942 **** 2025-09-11 00:42:42.568642 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568652 | orchestrator | 2025-09-11 
00:42:42.568663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:42.568674 | orchestrator | Thursday 11 September 2025 00:42:38 +0000 (0:00:00.424) 0:00:30.366 **** 2025-09-11 00:42:42.568685 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568695 | orchestrator | 2025-09-11 00:42:42.568706 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-11 00:42:42.568717 | orchestrator | Thursday 11 September 2025 00:42:38 +0000 (0:00:00.193) 0:00:30.560 **** 2025-09-11 00:42:42.568728 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568738 | orchestrator | 2025-09-11 00:42:42.568749 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-11 00:42:42.568760 | orchestrator | Thursday 11 September 2025 00:42:39 +0000 (0:00:00.113) 0:00:30.673 **** 2025-09-11 00:42:42.568771 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '344fe78f-9b90-543d-a55e-ac4ca1a09e29'}}) 2025-09-11 00:42:42.568782 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'}}) 2025-09-11 00:42:42.568793 | orchestrator | 2025-09-11 00:42:42.568803 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-11 00:42:42.568814 | orchestrator | Thursday 11 September 2025 00:42:39 +0000 (0:00:00.173) 0:00:30.846 **** 2025-09-11 00:42:42.568825 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'}) 2025-09-11 00:42:42.568837 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'}) 2025-09-11 00:42:42.568848 | orchestrator | 2025-09-11 
00:42:42.568859 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-11 00:42:42.568870 | orchestrator | Thursday 11 September 2025 00:42:41 +0000 (0:00:01.812) 0:00:32.658 **** 2025-09-11 00:42:42.568880 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:42.568892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:42.568903 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:42.568914 | orchestrator | 2025-09-11 00:42:42.568924 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-11 00:42:42.568935 | orchestrator | Thursday 11 September 2025 00:42:41 +0000 (0:00:00.140) 0:00:32.799 **** 2025-09-11 00:42:42.568946 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'}) 2025-09-11 00:42:42.568957 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'}) 2025-09-11 00:42:42.568967 | orchestrator | 2025-09-11 00:42:42.568985 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-11 00:42:47.616139 | orchestrator | Thursday 11 September 2025 00:42:42 +0000 (0:00:01.328) 0:00:34.128 **** 2025-09-11 00:42:47.616261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:47.616278 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 
'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:47.616290 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616302 | orchestrator | 2025-09-11 00:42:47.616314 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-11 00:42:47.616325 | orchestrator | Thursday 11 September 2025 00:42:42 +0000 (0:00:00.157) 0:00:34.285 **** 2025-09-11 00:42:47.616336 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616346 | orchestrator | 2025-09-11 00:42:47.616357 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-11 00:42:47.616368 | orchestrator | Thursday 11 September 2025 00:42:42 +0000 (0:00:00.117) 0:00:34.403 **** 2025-09-11 00:42:47.616379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:47.616406 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:47.616417 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616428 | orchestrator | 2025-09-11 00:42:47.616438 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-11 00:42:47.616449 | orchestrator | Thursday 11 September 2025 00:42:42 +0000 (0:00:00.137) 0:00:34.541 **** 2025-09-11 00:42:47.616459 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616470 | orchestrator | 2025-09-11 00:42:47.616480 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-11 00:42:47.616491 | orchestrator | Thursday 11 September 2025 00:42:43 +0000 (0:00:00.109) 0:00:34.650 **** 2025-09-11 00:42:47.616502 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:47.616513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:47.616523 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616534 | orchestrator | 2025-09-11 00:42:47.616545 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-11 00:42:47.616555 | orchestrator | Thursday 11 September 2025 00:42:43 +0000 (0:00:00.136) 0:00:34.787 **** 2025-09-11 00:42:47.616572 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616583 | orchestrator | 2025-09-11 00:42:47.616593 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-11 00:42:47.616604 | orchestrator | Thursday 11 September 2025 00:42:43 +0000 (0:00:00.276) 0:00:35.063 **** 2025-09-11 00:42:47.616614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:47.616625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:47.616636 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616647 | orchestrator | 2025-09-11 00:42:47.616659 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-11 00:42:47.616671 | orchestrator | Thursday 11 September 2025 00:42:43 +0000 (0:00:00.133) 0:00:35.197 **** 2025-09-11 00:42:47.616683 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:42:47.616697 | orchestrator | 2025-09-11 00:42:47.616709 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] 
**************** 2025-09-11 00:42:47.616721 | orchestrator | Thursday 11 September 2025 00:42:43 +0000 (0:00:00.128) 0:00:35.325 **** 2025-09-11 00:42:47.616741 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:47.616754 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:47.616767 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616779 | orchestrator | 2025-09-11 00:42:47.616791 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-11 00:42:47.616804 | orchestrator | Thursday 11 September 2025 00:42:43 +0000 (0:00:00.154) 0:00:35.479 **** 2025-09-11 00:42:47.616817 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:47.616829 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:47.616840 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616850 | orchestrator | 2025-09-11 00:42:47.616861 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-11 00:42:47.616871 | orchestrator | Thursday 11 September 2025 00:42:44 +0000 (0:00:00.143) 0:00:35.623 **** 2025-09-11 00:42:47.616898 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:47.616909 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 
'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:47.616920 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616931 | orchestrator | 2025-09-11 00:42:47.616942 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-11 00:42:47.616952 | orchestrator | Thursday 11 September 2025 00:42:44 +0000 (0:00:00.125) 0:00:35.749 **** 2025-09-11 00:42:47.616963 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.616973 | orchestrator | 2025-09-11 00:42:47.616984 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-11 00:42:47.616995 | orchestrator | Thursday 11 September 2025 00:42:44 +0000 (0:00:00.107) 0:00:35.856 **** 2025-09-11 00:42:47.617028 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.617039 | orchestrator | 2025-09-11 00:42:47.617050 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-11 00:42:47.617060 | orchestrator | Thursday 11 September 2025 00:42:44 +0000 (0:00:00.125) 0:00:35.982 **** 2025-09-11 00:42:47.617071 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.617081 | orchestrator | 2025-09-11 00:42:47.617092 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-11 00:42:47.617103 | orchestrator | Thursday 11 September 2025 00:42:44 +0000 (0:00:00.120) 0:00:36.102 **** 2025-09-11 00:42:47.617113 | orchestrator | ok: [testbed-node-4] => { 2025-09-11 00:42:47.617124 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-11 00:42:47.617135 | orchestrator | } 2025-09-11 00:42:47.617145 | orchestrator | 2025-09-11 00:42:47.617156 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-11 00:42:47.617167 | orchestrator | Thursday 11 September 2025 00:42:44 +0000 (0:00:00.127) 0:00:36.230 **** 2025-09-11 00:42:47.617177 | orchestrator | 
ok: [testbed-node-4] => { 2025-09-11 00:42:47.617188 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-11 00:42:47.617198 | orchestrator | } 2025-09-11 00:42:47.617209 | orchestrator | 2025-09-11 00:42:47.617220 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-11 00:42:47.617230 | orchestrator | Thursday 11 September 2025 00:42:44 +0000 (0:00:00.130) 0:00:36.360 **** 2025-09-11 00:42:47.617241 | orchestrator | ok: [testbed-node-4] => { 2025-09-11 00:42:47.617252 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-11 00:42:47.617271 | orchestrator | } 2025-09-11 00:42:47.617282 | orchestrator | 2025-09-11 00:42:47.617293 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-11 00:42:47.617303 | orchestrator | Thursday 11 September 2025 00:42:44 +0000 (0:00:00.129) 0:00:36.490 **** 2025-09-11 00:42:47.617314 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:42:47.617325 | orchestrator | 2025-09-11 00:42:47.617336 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-11 00:42:47.617346 | orchestrator | Thursday 11 September 2025 00:42:45 +0000 (0:00:00.668) 0:00:37.158 **** 2025-09-11 00:42:47.617362 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:42:47.617373 | orchestrator | 2025-09-11 00:42:47.617383 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-11 00:42:47.617394 | orchestrator | Thursday 11 September 2025 00:42:46 +0000 (0:00:00.544) 0:00:37.702 **** 2025-09-11 00:42:47.617404 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:42:47.617415 | orchestrator | 2025-09-11 00:42:47.617425 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-11 00:42:47.617436 | orchestrator | Thursday 11 September 2025 00:42:46 +0000 (0:00:00.526) 0:00:38.228 **** 2025-09-11 
00:42:47.617447 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:42:47.617457 | orchestrator | 2025-09-11 00:42:47.617468 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-11 00:42:47.617478 | orchestrator | Thursday 11 September 2025 00:42:46 +0000 (0:00:00.120) 0:00:38.349 **** 2025-09-11 00:42:47.617489 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.617499 | orchestrator | 2025-09-11 00:42:47.617510 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-11 00:42:47.617520 | orchestrator | Thursday 11 September 2025 00:42:46 +0000 (0:00:00.095) 0:00:38.444 **** 2025-09-11 00:42:47.617531 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.617541 | orchestrator | 2025-09-11 00:42:47.617552 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-11 00:42:47.617562 | orchestrator | Thursday 11 September 2025 00:42:46 +0000 (0:00:00.101) 0:00:38.545 **** 2025-09-11 00:42:47.617573 | orchestrator | ok: [testbed-node-4] => { 2025-09-11 00:42:47.617584 | orchestrator |  "vgs_report": { 2025-09-11 00:42:47.617595 | orchestrator |  "vg": [] 2025-09-11 00:42:47.617606 | orchestrator |  } 2025-09-11 00:42:47.617617 | orchestrator | } 2025-09-11 00:42:47.617628 | orchestrator | 2025-09-11 00:42:47.617638 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-11 00:42:47.617649 | orchestrator | Thursday 11 September 2025 00:42:47 +0000 (0:00:00.119) 0:00:38.665 **** 2025-09-11 00:42:47.617659 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.617670 | orchestrator | 2025-09-11 00:42:47.617680 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-11 00:42:47.617691 | orchestrator | Thursday 11 September 2025 00:42:47 +0000 (0:00:00.136) 0:00:38.801 **** 2025-09-11 
00:42:47.617702 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.617712 | orchestrator | 2025-09-11 00:42:47.617723 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-11 00:42:47.617733 | orchestrator | Thursday 11 September 2025 00:42:47 +0000 (0:00:00.110) 0:00:38.911 **** 2025-09-11 00:42:47.617744 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.617755 | orchestrator | 2025-09-11 00:42:47.617765 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-11 00:42:47.617776 | orchestrator | Thursday 11 September 2025 00:42:47 +0000 (0:00:00.142) 0:00:39.053 **** 2025-09-11 00:42:47.617787 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:47.617797 | orchestrator | 2025-09-11 00:42:47.617808 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-11 00:42:47.617826 | orchestrator | Thursday 11 September 2025 00:42:47 +0000 (0:00:00.122) 0:00:39.176 **** 2025-09-11 00:42:51.846050 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846119 | orchestrator | 2025-09-11 00:42:51.846145 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-11 00:42:51.846154 | orchestrator | Thursday 11 September 2025 00:42:47 +0000 (0:00:00.126) 0:00:39.303 **** 2025-09-11 00:42:51.846160 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846167 | orchestrator | 2025-09-11 00:42:51.846174 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-11 00:42:51.846180 | orchestrator | Thursday 11 September 2025 00:42:47 +0000 (0:00:00.239) 0:00:39.543 **** 2025-09-11 00:42:51.846186 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846193 | orchestrator | 2025-09-11 00:42:51.846200 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-09-11 00:42:51.846206 | orchestrator | Thursday 11 September 2025 00:42:48 +0000 (0:00:00.130) 0:00:39.673 **** 2025-09-11 00:42:51.846213 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846219 | orchestrator | 2025-09-11 00:42:51.846226 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-11 00:42:51.846232 | orchestrator | Thursday 11 September 2025 00:42:48 +0000 (0:00:00.121) 0:00:39.795 **** 2025-09-11 00:42:51.846238 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846244 | orchestrator | 2025-09-11 00:42:51.846250 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-11 00:42:51.846256 | orchestrator | Thursday 11 September 2025 00:42:48 +0000 (0:00:00.122) 0:00:39.917 **** 2025-09-11 00:42:51.846262 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846269 | orchestrator | 2025-09-11 00:42:51.846275 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-11 00:42:51.846281 | orchestrator | Thursday 11 September 2025 00:42:48 +0000 (0:00:00.126) 0:00:40.044 **** 2025-09-11 00:42:51.846287 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846294 | orchestrator | 2025-09-11 00:42:51.846301 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-11 00:42:51.846307 | orchestrator | Thursday 11 September 2025 00:42:48 +0000 (0:00:00.127) 0:00:40.171 **** 2025-09-11 00:42:51.846313 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846320 | orchestrator | 2025-09-11 00:42:51.846327 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-11 00:42:51.846334 | orchestrator | Thursday 11 September 2025 00:42:48 +0000 (0:00:00.127) 0:00:40.299 **** 2025-09-11 00:42:51.846341 | orchestrator | skipping: [testbed-node-4] 
2025-09-11 00:42:51.846348 | orchestrator | 2025-09-11 00:42:51.846355 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-11 00:42:51.846362 | orchestrator | Thursday 11 September 2025 00:42:48 +0000 (0:00:00.126) 0:00:40.426 **** 2025-09-11 00:42:51.846369 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846375 | orchestrator | 2025-09-11 00:42:51.846382 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-11 00:42:51.846389 | orchestrator | Thursday 11 September 2025 00:42:48 +0000 (0:00:00.132) 0:00:40.558 **** 2025-09-11 00:42:51.846406 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:51.846414 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:51.846422 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846429 | orchestrator | 2025-09-11 00:42:51.846436 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-11 00:42:51.846443 | orchestrator | Thursday 11 September 2025 00:42:49 +0000 (0:00:00.145) 0:00:40.704 **** 2025-09-11 00:42:51.846450 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:51.846457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:51.846471 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846479 | orchestrator | 2025-09-11 00:42:51.846486 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-09-11 00:42:51.846493 | orchestrator | Thursday 11 September 2025 00:42:49 +0000 (0:00:00.133) 0:00:40.838 **** 2025-09-11 00:42:51.846500 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:51.846509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:51.846516 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846523 | orchestrator | 2025-09-11 00:42:51.846530 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-11 00:42:51.846537 | orchestrator | Thursday 11 September 2025 00:42:49 +0000 (0:00:00.146) 0:00:40.984 **** 2025-09-11 00:42:51.846543 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:51.846551 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:51.846557 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846563 | orchestrator | 2025-09-11 00:42:51.846570 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-11 00:42:51.846589 | orchestrator | Thursday 11 September 2025 00:42:49 +0000 (0:00:00.272) 0:00:41.257 **** 2025-09-11 00:42:51.846596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:51.846602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 
'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:51.846609 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846616 | orchestrator | 2025-09-11 00:42:51.846623 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-11 00:42:51.846630 | orchestrator | Thursday 11 September 2025 00:42:49 +0000 (0:00:00.142) 0:00:41.399 **** 2025-09-11 00:42:51.846637 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:51.846645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:51.846652 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846660 | orchestrator | 2025-09-11 00:42:51.846667 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-11 00:42:51.846674 | orchestrator | Thursday 11 September 2025 00:42:49 +0000 (0:00:00.139) 0:00:41.538 **** 2025-09-11 00:42:51.846681 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:51.846689 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:51.846696 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846703 | orchestrator | 2025-09-11 00:42:51.846710 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-11 00:42:51.846717 | orchestrator | Thursday 11 September 2025 00:42:50 +0000 (0:00:00.145) 0:00:41.684 **** 2025-09-11 00:42:51.846724 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:51.846736 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:51.846744 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846751 | orchestrator | 2025-09-11 00:42:51.846759 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-11 00:42:51.846791 | orchestrator | Thursday 11 September 2025 00:42:50 +0000 (0:00:00.130) 0:00:41.814 **** 2025-09-11 00:42:51.846799 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:42:51.846806 | orchestrator | 2025-09-11 00:42:51.846814 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-11 00:42:51.846822 | orchestrator | Thursday 11 September 2025 00:42:50 +0000 (0:00:00.498) 0:00:42.313 **** 2025-09-11 00:42:51.846829 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:42:51.846836 | orchestrator | 2025-09-11 00:42:51.846843 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-11 00:42:51.846850 | orchestrator | Thursday 11 September 2025 00:42:51 +0000 (0:00:00.505) 0:00:42.819 **** 2025-09-11 00:42:51.846858 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:42:51.846865 | orchestrator | 2025-09-11 00:42:51.846872 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-11 00:42:51.846879 | orchestrator | Thursday 11 September 2025 00:42:51 +0000 (0:00:00.148) 0:00:42.968 **** 2025-09-11 00:42:51.846887 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'vg_name': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'}) 2025-09-11 00:42:51.846895 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'vg_name': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'}) 2025-09-11 00:42:51.846903 | orchestrator | 2025-09-11 00:42:51.846910 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-11 00:42:51.846918 | orchestrator | Thursday 11 September 2025 00:42:51 +0000 (0:00:00.160) 0:00:43.128 **** 2025-09-11 00:42:51.846926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:51.846933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:51.846940 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:51.846948 | orchestrator | 2025-09-11 00:42:51.846955 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-11 00:42:51.846961 | orchestrator | Thursday 11 September 2025 00:42:51 +0000 (0:00:00.138) 0:00:43.266 **** 2025-09-11 00:42:51.846968 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:51.846976 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:51.846987 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:57.716900 | orchestrator | 2025-09-11 00:42:57.717051 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-11 00:42:57.717069 | orchestrator | Thursday 11 September 2025 00:42:51 +0000 (0:00:00.139) 0:00:43.406 **** 2025-09-11 00:42:57.717083 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})  2025-09-11 00:42:57.717096 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})  2025-09-11 00:42:57.717108 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:42:57.717119 | orchestrator | 2025-09-11 00:42:57.717131 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-11 00:42:57.717142 | orchestrator | Thursday 11 September 2025 00:42:51 +0000 (0:00:00.140) 0:00:43.546 **** 2025-09-11 00:42:57.717179 | orchestrator | ok: [testbed-node-4] => { 2025-09-11 00:42:57.717192 | orchestrator |  "lvm_report": { 2025-09-11 00:42:57.717205 | orchestrator |  "lv": [ 2025-09-11 00:42:57.717216 | orchestrator |  { 2025-09-11 00:42:57.717227 | orchestrator |  "lv_name": "osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29", 2025-09-11 00:42:57.717239 | orchestrator |  "vg_name": "ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29" 2025-09-11 00:42:57.717250 | orchestrator |  }, 2025-09-11 00:42:57.717260 | orchestrator |  { 2025-09-11 00:42:57.717271 | orchestrator |  "lv_name": "osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2", 2025-09-11 00:42:57.717282 | orchestrator |  "vg_name": "ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2" 2025-09-11 00:42:57.717292 | orchestrator |  } 2025-09-11 00:42:57.717303 | orchestrator |  ], 2025-09-11 00:42:57.717314 | orchestrator |  "pv": [ 2025-09-11 00:42:57.717324 | orchestrator |  { 2025-09-11 00:42:57.717335 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-11 00:42:57.717346 | orchestrator |  "vg_name": "ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29" 2025-09-11 00:42:57.717356 | orchestrator |  }, 2025-09-11 00:42:57.717367 | orchestrator |  { 2025-09-11 00:42:57.717377 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-11 00:42:57.717388 | orchestrator |  "vg_name": 
"ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2" 2025-09-11 00:42:57.717398 | orchestrator |  } 2025-09-11 00:42:57.717411 | orchestrator |  ] 2025-09-11 00:42:57.717423 | orchestrator |  } 2025-09-11 00:42:57.717435 | orchestrator | } 2025-09-11 00:42:57.717448 | orchestrator | 2025-09-11 00:42:57.717460 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-11 00:42:57.717473 | orchestrator | 2025-09-11 00:42:57.717485 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-11 00:42:57.717497 | orchestrator | Thursday 11 September 2025 00:42:52 +0000 (0:00:00.377) 0:00:43.924 **** 2025-09-11 00:42:57.717509 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-11 00:42:57.717521 | orchestrator | 2025-09-11 00:42:57.717550 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-11 00:42:57.717563 | orchestrator | Thursday 11 September 2025 00:42:52 +0000 (0:00:00.240) 0:00:44.165 **** 2025-09-11 00:42:57.717575 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:42:57.717588 | orchestrator | 2025-09-11 00:42:57.717600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.717612 | orchestrator | Thursday 11 September 2025 00:42:52 +0000 (0:00:00.227) 0:00:44.392 **** 2025-09-11 00:42:57.717624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-11 00:42:57.717636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-11 00:42:57.717648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-11 00:42:57.717660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-11 00:42:57.717672 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-11 00:42:57.717684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-11 00:42:57.717696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-11 00:42:57.717708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-11 00:42:57.717720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-11 00:42:57.717732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-11 00:42:57.717744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-11 00:42:57.717766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-11 00:42:57.717777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-11 00:42:57.717787 | orchestrator | 2025-09-11 00:42:57.717798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.717808 | orchestrator | Thursday 11 September 2025 00:42:53 +0000 (0:00:00.411) 0:00:44.804 **** 2025-09-11 00:42:57.717819 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:42:57.717835 | orchestrator | 2025-09-11 00:42:57.717846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.717856 | orchestrator | Thursday 11 September 2025 00:42:53 +0000 (0:00:00.196) 0:00:45.000 **** 2025-09-11 00:42:57.717867 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:42:57.717878 | orchestrator | 2025-09-11 00:42:57.717888 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.717917 | orchestrator | 
Thursday 11 September 2025 00:42:53 +0000 (0:00:00.198) 0:00:45.199 **** 2025-09-11 00:42:57.717928 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:42:57.717939 | orchestrator | 2025-09-11 00:42:57.717950 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.717960 | orchestrator | Thursday 11 September 2025 00:42:53 +0000 (0:00:00.192) 0:00:45.391 **** 2025-09-11 00:42:57.717971 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:42:57.717981 | orchestrator | 2025-09-11 00:42:57.718012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.718081 | orchestrator | Thursday 11 September 2025 00:42:54 +0000 (0:00:00.202) 0:00:45.594 **** 2025-09-11 00:42:57.718092 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:42:57.718103 | orchestrator | 2025-09-11 00:42:57.718114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.718124 | orchestrator | Thursday 11 September 2025 00:42:54 +0000 (0:00:00.204) 0:00:45.799 **** 2025-09-11 00:42:57.718135 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:42:57.718146 | orchestrator | 2025-09-11 00:42:57.718156 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.718167 | orchestrator | Thursday 11 September 2025 00:42:54 +0000 (0:00:00.600) 0:00:46.400 **** 2025-09-11 00:42:57.718177 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:42:57.718188 | orchestrator | 2025-09-11 00:42:57.718199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.718209 | orchestrator | Thursday 11 September 2025 00:42:55 +0000 (0:00:00.194) 0:00:46.594 **** 2025-09-11 00:42:57.718220 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:42:57.718230 | orchestrator | 2025-09-11 00:42:57.718241 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.718251 | orchestrator | Thursday 11 September 2025 00:42:55 +0000 (0:00:00.210) 0:00:46.804 **** 2025-09-11 00:42:57.718262 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc) 2025-09-11 00:42:57.718274 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc) 2025-09-11 00:42:57.718285 | orchestrator | 2025-09-11 00:42:57.718296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.718306 | orchestrator | Thursday 11 September 2025 00:42:55 +0000 (0:00:00.414) 0:00:47.219 **** 2025-09-11 00:42:57.718322 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3) 2025-09-11 00:42:57.718341 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3) 2025-09-11 00:42:57.718354 | orchestrator | 2025-09-11 00:42:57.718364 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.718375 | orchestrator | Thursday 11 September 2025 00:42:56 +0000 (0:00:00.419) 0:00:47.638 **** 2025-09-11 00:42:57.718402 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a) 2025-09-11 00:42:57.718413 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a) 2025-09-11 00:42:57.718424 | orchestrator | 2025-09-11 00:42:57.718435 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.718445 | orchestrator | Thursday 11 September 2025 00:42:56 +0000 (0:00:00.427) 0:00:48.066 **** 2025-09-11 00:42:57.718455 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a) 2025-09-11 00:42:57.718466 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a) 2025-09-11 00:42:57.718477 | orchestrator | 2025-09-11 00:42:57.718487 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-11 00:42:57.718498 | orchestrator | Thursday 11 September 2025 00:42:56 +0000 (0:00:00.435) 0:00:48.501 **** 2025-09-11 00:42:57.718508 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-11 00:42:57.718519 | orchestrator | 2025-09-11 00:42:57.718530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:42:57.718540 | orchestrator | Thursday 11 September 2025 00:42:57 +0000 (0:00:00.341) 0:00:48.843 **** 2025-09-11 00:42:57.718550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-11 00:42:57.718561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-11 00:42:57.718571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-11 00:42:57.718582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-11 00:42:57.718592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-11 00:42:57.718603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-11 00:42:57.718613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-11 00:42:57.718627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-11 00:42:57.718644 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-11 00:42:57.718656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-11 00:42:57.718669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-11 00:42:57.718696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-11 00:43:06.557436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-11 00:43:06.557517 | orchestrator | 2025-09-11 00:43:06.557528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557536 | orchestrator | Thursday 11 September 2025 00:42:57 +0000 (0:00:00.427) 0:00:49.270 **** 2025-09-11 00:43:06.557543 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557550 | orchestrator | 2025-09-11 00:43:06.557556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557563 | orchestrator | Thursday 11 September 2025 00:42:57 +0000 (0:00:00.203) 0:00:49.474 **** 2025-09-11 00:43:06.557569 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557575 | orchestrator | 2025-09-11 00:43:06.557582 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557588 | orchestrator | Thursday 11 September 2025 00:42:58 +0000 (0:00:00.195) 0:00:49.670 **** 2025-09-11 00:43:06.557594 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557600 | orchestrator | 2025-09-11 00:43:06.557607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557633 | orchestrator | Thursday 11 September 2025 00:42:58 +0000 (0:00:00.606) 0:00:50.276 **** 2025-09-11 00:43:06.557639 | orchestrator | 
skipping: [testbed-node-5] 2025-09-11 00:43:06.557646 | orchestrator | 2025-09-11 00:43:06.557652 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557658 | orchestrator | Thursday 11 September 2025 00:42:58 +0000 (0:00:00.216) 0:00:50.492 **** 2025-09-11 00:43:06.557664 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557670 | orchestrator | 2025-09-11 00:43:06.557676 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557683 | orchestrator | Thursday 11 September 2025 00:42:59 +0000 (0:00:00.202) 0:00:50.694 **** 2025-09-11 00:43:06.557689 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557695 | orchestrator | 2025-09-11 00:43:06.557701 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557707 | orchestrator | Thursday 11 September 2025 00:42:59 +0000 (0:00:00.192) 0:00:50.887 **** 2025-09-11 00:43:06.557713 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557719 | orchestrator | 2025-09-11 00:43:06.557726 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557732 | orchestrator | Thursday 11 September 2025 00:42:59 +0000 (0:00:00.204) 0:00:51.092 **** 2025-09-11 00:43:06.557738 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557744 | orchestrator | 2025-09-11 00:43:06.557750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557756 | orchestrator | Thursday 11 September 2025 00:42:59 +0000 (0:00:00.201) 0:00:51.293 **** 2025-09-11 00:43:06.557763 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-11 00:43:06.557769 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-11 00:43:06.557776 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-11 
00:43:06.557783 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-11 00:43:06.557789 | orchestrator | 2025-09-11 00:43:06.557795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557801 | orchestrator | Thursday 11 September 2025 00:43:00 +0000 (0:00:00.616) 0:00:51.910 **** 2025-09-11 00:43:06.557807 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557814 | orchestrator | 2025-09-11 00:43:06.557820 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557826 | orchestrator | Thursday 11 September 2025 00:43:00 +0000 (0:00:00.197) 0:00:52.108 **** 2025-09-11 00:43:06.557832 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557838 | orchestrator | 2025-09-11 00:43:06.557845 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557851 | orchestrator | Thursday 11 September 2025 00:43:00 +0000 (0:00:00.200) 0:00:52.309 **** 2025-09-11 00:43:06.557857 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557863 | orchestrator | 2025-09-11 00:43:06.557869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-11 00:43:06.557875 | orchestrator | Thursday 11 September 2025 00:43:00 +0000 (0:00:00.196) 0:00:52.505 **** 2025-09-11 00:43:06.557881 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557888 | orchestrator | 2025-09-11 00:43:06.557894 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-11 00:43:06.557900 | orchestrator | Thursday 11 September 2025 00:43:01 +0000 (0:00:00.231) 0:00:52.736 **** 2025-09-11 00:43:06.557906 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.557912 | orchestrator | 2025-09-11 00:43:06.557918 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-11 00:43:06.557924 | orchestrator | Thursday 11 September 2025 00:43:01 +0000 (0:00:00.347) 0:00:53.084 **** 2025-09-11 00:43:06.557930 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1fcfbff8-db79-5f3f-a505-ec8e716f38d6'}}) 2025-09-11 00:43:06.557937 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a3e2512-7b8b-5f78-845d-17a09314c972'}}) 2025-09-11 00:43:06.557948 | orchestrator | 2025-09-11 00:43:06.557954 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-11 00:43:06.557961 | orchestrator | Thursday 11 September 2025 00:43:01 +0000 (0:00:00.190) 0:00:53.275 **** 2025-09-11 00:43:06.557967 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'}) 2025-09-11 00:43:06.557974 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'}) 2025-09-11 00:43:06.557980 | orchestrator | 2025-09-11 00:43:06.558010 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-11 00:43:06.558071 | orchestrator | Thursday 11 September 2025 00:43:03 +0000 (0:00:01.838) 0:00:55.113 **** 2025-09-11 00:43:06.558079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})  2025-09-11 00:43:06.558088 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})  2025-09-11 00:43:06.558095 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.558102 | orchestrator | 2025-09-11 00:43:06.558109 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-11 00:43:06.558117 | orchestrator | Thursday 11 September 2025 00:43:03 +0000 (0:00:00.150) 0:00:55.264 **** 2025-09-11 00:43:06.558145 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'}) 2025-09-11 00:43:06.558166 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'}) 2025-09-11 00:43:06.558175 | orchestrator | 2025-09-11 00:43:06.558182 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-11 00:43:06.558189 | orchestrator | Thursday 11 September 2025 00:43:05 +0000 (0:00:01.346) 0:00:56.611 **** 2025-09-11 00:43:06.558196 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})  2025-09-11 00:43:06.558205 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})  2025-09-11 00:43:06.558211 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.558219 | orchestrator | 2025-09-11 00:43:06.558226 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-11 00:43:06.558234 | orchestrator | Thursday 11 September 2025 00:43:05 +0000 (0:00:00.150) 0:00:56.761 **** 2025-09-11 00:43:06.558241 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.558248 | orchestrator | 2025-09-11 00:43:06.558255 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-11 00:43:06.558262 | orchestrator | Thursday 11 September 2025 00:43:05 +0000 (0:00:00.135) 0:00:56.896 **** 2025-09-11 00:43:06.558270 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})  2025-09-11 00:43:06.558281 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})  2025-09-11 00:43:06.558289 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.558296 | orchestrator | 2025-09-11 00:43:06.558303 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-11 00:43:06.558310 | orchestrator | Thursday 11 September 2025 00:43:05 +0000 (0:00:00.149) 0:00:57.046 **** 2025-09-11 00:43:06.558318 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.558333 | orchestrator | 2025-09-11 00:43:06.558340 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-11 00:43:06.558347 | orchestrator | Thursday 11 September 2025 00:43:05 +0000 (0:00:00.135) 0:00:57.181 **** 2025-09-11 00:43:06.558353 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})  2025-09-11 00:43:06.558359 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})  2025-09-11 00:43:06.558365 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.558371 | orchestrator | 2025-09-11 00:43:06.558377 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-11 00:43:06.558384 | orchestrator | Thursday 11 September 2025 00:43:05 +0000 (0:00:00.146) 0:00:57.328 **** 2025-09-11 00:43:06.558390 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.558396 | orchestrator | 2025-09-11 00:43:06.558402 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-11 00:43:06.558408 | orchestrator | Thursday 11 September 2025 00:43:05 +0000 (0:00:00.141) 0:00:57.470 **** 2025-09-11 00:43:06.558414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})  2025-09-11 00:43:06.558420 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})  2025-09-11 00:43:06.558426 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:06.558433 | orchestrator | 2025-09-11 00:43:06.558439 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-11 00:43:06.558445 | orchestrator | Thursday 11 September 2025 00:43:06 +0000 (0:00:00.156) 0:00:57.627 **** 2025-09-11 00:43:06.558451 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:43:06.558457 | orchestrator | 2025-09-11 00:43:06.558464 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-11 00:43:06.558470 | orchestrator | Thursday 11 September 2025 00:43:06 +0000 (0:00:00.331) 0:00:57.959 **** 2025-09-11 00:43:06.558481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})  2025-09-11 00:43:13.042586 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})  2025-09-11 00:43:13.042667 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:13.042675 | orchestrator | 2025-09-11 00:43:13.042682 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-11 00:43:13.042688 | orchestrator | Thursday 11 September 2025 
00:43:06 +0000 (0:00:00.160) 0:00:58.119 **** 2025-09-11 00:43:13.042693 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})  2025-09-11 00:43:13.042699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})  2025-09-11 00:43:13.042704 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:13.042709 | orchestrator | 2025-09-11 00:43:13.042714 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-11 00:43:13.042719 | orchestrator | Thursday 11 September 2025 00:43:06 +0000 (0:00:00.144) 0:00:58.263 **** 2025-09-11 00:43:13.042724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})  2025-09-11 00:43:13.042729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})  2025-09-11 00:43:13.042734 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:13.042754 | orchestrator | 2025-09-11 00:43:13.042759 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-11 00:43:13.042764 | orchestrator | Thursday 11 September 2025 00:43:06 +0000 (0:00:00.160) 0:00:58.424 **** 2025-09-11 00:43:13.042769 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:43:13.042773 | orchestrator | 2025-09-11 00:43:13.042778 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-11 00:43:13.042783 | orchestrator | Thursday 11 September 2025 00:43:06 +0000 (0:00:00.127) 0:00:58.551 **** 2025-09-11 00:43:13.042787 | orchestrator | skipping: [testbed-node-5] 2025-09-11 
00:43:13.042792 | orchestrator |
2025-09-11 00:43:13.042797 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-11 00:43:13.042802 | orchestrator | Thursday 11 September 2025  00:43:07 +0000 (0:00:00.137)       0:00:58.688 ****
2025-09-11 00:43:13.042807 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.042812 | orchestrator |
2025-09-11 00:43:13.042817 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-11 00:43:13.042832 | orchestrator | Thursday 11 September 2025  00:43:07 +0000 (0:00:00.137)       0:00:58.825 ****
2025-09-11 00:43:13.042838 | orchestrator | ok: [testbed-node-5] => {
2025-09-11 00:43:13.042843 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-11 00:43:13.042849 | orchestrator | }
2025-09-11 00:43:13.042854 | orchestrator |
2025-09-11 00:43:13.042859 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-11 00:43:13.042864 | orchestrator | Thursday 11 September 2025  00:43:07 +0000 (0:00:00.145)       0:00:58.971 ****
2025-09-11 00:43:13.042869 | orchestrator | ok: [testbed-node-5] => {
2025-09-11 00:43:13.042874 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-11 00:43:13.042879 | orchestrator | }
2025-09-11 00:43:13.042884 | orchestrator |
2025-09-11 00:43:13.042889 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-11 00:43:13.042895 | orchestrator | Thursday 11 September 2025  00:43:07 +0000 (0:00:00.138)       0:00:59.109 ****
2025-09-11 00:43:13.042900 | orchestrator | ok: [testbed-node-5] => {
2025-09-11 00:43:13.042912 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-11 00:43:13.042918 | orchestrator | }
2025-09-11 00:43:13.042923 | orchestrator |
2025-09-11 00:43:13.042928 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-11 00:43:13.042933 | orchestrator | Thursday 11 September 2025  00:43:07 +0000 (0:00:00.137)       0:00:59.247 ****
2025-09-11 00:43:13.042938 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:43:13.042943 | orchestrator |
2025-09-11 00:43:13.042948 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-11 00:43:13.042953 | orchestrator | Thursday 11 September 2025  00:43:08 +0000 (0:00:00.671)       0:00:59.918 ****
2025-09-11 00:43:13.042958 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:43:13.042963 | orchestrator |
2025-09-11 00:43:13.042968 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-11 00:43:13.042973 | orchestrator | Thursday 11 September 2025  00:43:08 +0000 (0:00:00.541)       0:01:00.459 ****
2025-09-11 00:43:13.043012 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:43:13.043018 | orchestrator |
2025-09-11 00:43:13.043023 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-11 00:43:13.043028 | orchestrator | Thursday 11 September 2025  00:43:09 +0000 (0:00:00.755)       0:01:01.214 ****
2025-09-11 00:43:13.043033 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:43:13.043038 | orchestrator |
2025-09-11 00:43:13.043043 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-11 00:43:13.043048 | orchestrator | Thursday 11 September 2025  00:43:09 +0000 (0:00:00.173)       0:01:01.388 ****
2025-09-11 00:43:13.043053 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043058 | orchestrator |
2025-09-11 00:43:13.043063 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-11 00:43:13.043069 | orchestrator | Thursday 11 September 2025  00:43:09 +0000 (0:00:00.124)       0:01:01.513 ****
2025-09-11 00:43:13.043080 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043085 | orchestrator |
2025-09-11 00:43:13.043090 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-11 00:43:13.043095 | orchestrator | Thursday 11 September 2025  00:43:10 +0000 (0:00:00.097)       0:01:01.610 ****
2025-09-11 00:43:13.043101 | orchestrator | ok: [testbed-node-5] => {
2025-09-11 00:43:13.043106 | orchestrator |     "vgs_report": {
2025-09-11 00:43:13.043111 | orchestrator |         "vg": []
2025-09-11 00:43:13.043128 | orchestrator |     }
2025-09-11 00:43:13.043133 | orchestrator | }
2025-09-11 00:43:13.043139 | orchestrator |
2025-09-11 00:43:13.043144 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-11 00:43:13.043150 | orchestrator | Thursday 11 September 2025  00:43:10 +0000 (0:00:00.163)       0:01:01.774 ****
2025-09-11 00:43:13.043156 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043162 | orchestrator |
2025-09-11 00:43:13.043168 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-11 00:43:13.043174 | orchestrator | Thursday 11 September 2025  00:43:10 +0000 (0:00:00.171)       0:01:01.945 ****
2025-09-11 00:43:13.043179 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043185 | orchestrator |
2025-09-11 00:43:13.043190 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-11 00:43:13.043196 | orchestrator | Thursday 11 September 2025  00:43:10 +0000 (0:00:00.167)       0:01:02.113 ****
2025-09-11 00:43:13.043202 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043208 | orchestrator |
2025-09-11 00:43:13.043214 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-11 00:43:13.043219 | orchestrator | Thursday 11 September 2025  00:43:10 +0000 (0:00:00.150)       0:01:02.263 ****
2025-09-11 00:43:13.043226 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043232 | orchestrator |
2025-09-11 00:43:13.043237 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-11 00:43:13.043243 | orchestrator | Thursday 11 September 2025  00:43:10 +0000 (0:00:00.162)       0:01:02.426 ****
2025-09-11 00:43:13.043249 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043255 | orchestrator |
2025-09-11 00:43:13.043261 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-11 00:43:13.043266 | orchestrator | Thursday 11 September 2025  00:43:11 +0000 (0:00:00.165)       0:01:02.592 ****
2025-09-11 00:43:13.043272 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043278 | orchestrator |
2025-09-11 00:43:13.043284 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-11 00:43:13.043289 | orchestrator | Thursday 11 September 2025  00:43:11 +0000 (0:00:00.168)       0:01:02.760 ****
2025-09-11 00:43:13.043295 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043301 | orchestrator |
2025-09-11 00:43:13.043306 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-11 00:43:13.043312 | orchestrator | Thursday 11 September 2025  00:43:11 +0000 (0:00:00.151)       0:01:02.912 ****
2025-09-11 00:43:13.043318 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043324 | orchestrator |
2025-09-11 00:43:13.043330 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-11 00:43:13.043335 | orchestrator | Thursday 11 September 2025  00:43:11 +0000 (0:00:00.153)       0:01:03.066 ****
2025-09-11 00:43:13.043341 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043347 | orchestrator |
2025-09-11 00:43:13.043353 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-11 00:43:13.043363 | orchestrator | Thursday 11 September 2025  00:43:11 +0000 (0:00:00.373)       0:01:03.439 ****
2025-09-11 00:43:13.043369 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043375 | orchestrator |
2025-09-11 00:43:13.043380 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-11 00:43:13.043386 | orchestrator | Thursday 11 September 2025  00:43:12 +0000 (0:00:00.151)       0:01:03.591 ****
2025-09-11 00:43:13.043392 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043402 | orchestrator |
2025-09-11 00:43:13.043408 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-11 00:43:13.043414 | orchestrator | Thursday 11 September 2025  00:43:12 +0000 (0:00:00.149)       0:01:03.740 ****
2025-09-11 00:43:13.043420 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043426 | orchestrator |
2025-09-11 00:43:13.043431 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-11 00:43:13.043437 | orchestrator | Thursday 11 September 2025  00:43:12 +0000 (0:00:00.138)       0:01:03.879 ****
2025-09-11 00:43:13.043443 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043449 | orchestrator |
2025-09-11 00:43:13.043455 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-11 00:43:13.043461 | orchestrator | Thursday 11 September 2025  00:43:12 +0000 (0:00:00.139)       0:01:04.019 ****
2025-09-11 00:43:13.043467 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043472 | orchestrator |
2025-09-11 00:43:13.043478 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-11 00:43:13.043484 | orchestrator | Thursday 11 September 2025  00:43:12 +0000 (0:00:00.132)       0:01:04.151 ****
2025-09-11 00:43:13.043490 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:13.043496 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:13.043502 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043507 | orchestrator |
2025-09-11 00:43:13.043512 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-11 00:43:13.043518 | orchestrator | Thursday 11 September 2025  00:43:12 +0000 (0:00:00.148)       0:01:04.300 ****
2025-09-11 00:43:13.043523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:13.043528 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:13.043533 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:13.043538 | orchestrator |
2025-09-11 00:43:13.043543 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-11 00:43:13.043548 | orchestrator | Thursday 11 September 2025  00:43:12 +0000 (0:00:00.149)       0:01:04.450 ****
2025-09-11 00:43:13.043557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:16.126605 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:16.126709 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:16.126723 | orchestrator |
2025-09-11 00:43:16.126735 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-11 00:43:16.126747 | orchestrator | Thursday 11 September 2025  00:43:13 +0000 (0:00:00.153)       0:01:04.604 ****
2025-09-11 00:43:16.126759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:16.126770 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:16.126781 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:16.126792 | orchestrator |
2025-09-11 00:43:16.126803 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-11 00:43:16.126813 | orchestrator | Thursday 11 September 2025  00:43:13 +0000 (0:00:00.142)       0:01:04.746 ****
2025-09-11 00:43:16.126824 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:16.126862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:16.126873 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:16.126884 | orchestrator |
2025-09-11 00:43:16.126895 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-11 00:43:16.126906 | orchestrator | Thursday 11 September 2025  00:43:13 +0000 (0:00:00.151)       0:01:04.898 ****
2025-09-11 00:43:16.126917 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:16.126928 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:16.126938 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:16.126949 | orchestrator |
2025-09-11 00:43:16.127038 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-11 00:43:16.127050 | orchestrator | Thursday 11 September 2025  00:43:13 +0000 (0:00:00.154)       0:01:05.052 ****
2025-09-11 00:43:16.127061 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:16.127072 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:16.127083 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:16.127094 | orchestrator |
2025-09-11 00:43:16.127104 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-11 00:43:16.127116 | orchestrator | Thursday 11 September 2025  00:43:13 +0000 (0:00:00.310)       0:01:05.363 ****
2025-09-11 00:43:16.127127 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:16.127140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:16.127152 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:16.127164 | orchestrator |
2025-09-11 00:43:16.127176 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-11 00:43:16.127188 | orchestrator | Thursday 11 September 2025  00:43:13 +0000 (0:00:00.155)       0:01:05.518 ****
2025-09-11 00:43:16.127201 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:43:16.127213 | orchestrator |
2025-09-11 00:43:16.127225 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-11 00:43:16.127237 | orchestrator | Thursday 11 September 2025  00:43:14 +0000 (0:00:00.563)       0:01:06.082 ****
2025-09-11 00:43:16.127249 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:43:16.127261 | orchestrator |
2025-09-11 00:43:16.127273 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-11 00:43:16.127285 | orchestrator | Thursday 11 September 2025  00:43:15 +0000 (0:00:00.614)       0:01:06.696 ****
2025-09-11 00:43:16.127297 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:43:16.127309 | orchestrator |
2025-09-11 00:43:16.127321 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-11 00:43:16.127333 | orchestrator | Thursday 11 September 2025  00:43:15 +0000 (0:00:00.145)       0:01:06.842 ****
2025-09-11 00:43:16.127345 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'vg_name': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:16.127359 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'vg_name': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:16.127370 | orchestrator |
2025-09-11 00:43:16.127382 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-11 00:43:16.127404 | orchestrator | Thursday 11 September 2025  00:43:15 +0000 (0:00:00.178)       0:01:07.020 ****
2025-09-11 00:43:16.127434 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:16.127447 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:16.127459 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:16.127472 | orchestrator |
2025-09-11 00:43:16.127485 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-11 00:43:16.127497 | orchestrator | Thursday 11 September 2025  00:43:15 +0000 (0:00:00.169)       0:01:07.189 ****
2025-09-11 00:43:16.127507 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:16.127543 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:16.127555 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:16.127566 | orchestrator |
2025-09-11 00:43:16.127577 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-11 00:43:16.127588 | orchestrator | Thursday 11 September 2025  00:43:15 +0000 (0:00:00.164)       0:01:07.354 ****
2025-09-11 00:43:16.127599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:43:16.127628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:43:16.127639 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:16.127650 | orchestrator |
2025-09-11 00:43:16.127661 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-11 00:43:16.127672 | orchestrator | Thursday 11 September 2025  00:43:15 +0000 (0:00:00.161)       0:01:07.515 ****
2025-09-11 00:43:16.127682 | orchestrator | ok: [testbed-node-5] => {
2025-09-11 00:43:16.127693 | orchestrator |     "lvm_report": {
2025-09-11 00:43:16.127705 | orchestrator |         "lv": [
2025-09-11 00:43:16.127716 | orchestrator |             {
2025-09-11 00:43:16.127727 | orchestrator |                 "lv_name": "osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6",
2025-09-11 00:43:16.127743 | orchestrator |                 "vg_name": "ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6"
2025-09-11 00:43:16.127754 | orchestrator |             },
2025-09-11 00:43:16.127781 | orchestrator |             {
2025-09-11 00:43:16.127792 | orchestrator |                 "lv_name": "osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972",
2025-09-11 00:43:16.127825 | orchestrator |                 "vg_name": "ceph-8a3e2512-7b8b-5f78-845d-17a09314c972"
2025-09-11 00:43:16.127836 | orchestrator |             }
2025-09-11 00:43:16.127847 | orchestrator |         ],
2025-09-11 00:43:16.127857 | orchestrator |         "pv": [
2025-09-11 00:43:16.127868 | orchestrator |             {
2025-09-11 00:43:16.127879 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-11 00:43:16.127890 | orchestrator |                 "vg_name": "ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6"
2025-09-11 00:43:16.127900 | orchestrator |             },
2025-09-11 00:43:16.127911 | orchestrator |             {
2025-09-11 00:43:16.127922 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-11 00:43:16.127932 | orchestrator |                 "vg_name": "ceph-8a3e2512-7b8b-5f78-845d-17a09314c972"
2025-09-11 00:43:16.127954 | orchestrator |             }
2025-09-11 00:43:16.127992 | orchestrator |         ]
2025-09-11 00:43:16.128004 | orchestrator |     }
2025-09-11 00:43:16.128015 | orchestrator | }
2025-09-11 00:43:16.128026 | orchestrator |
2025-09-11 00:43:16.128037 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:43:16.128070 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-11 00:43:16.128082 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-11 00:43:16.128093 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-11 00:43:16.128103 | orchestrator |
2025-09-11 00:43:16.128129 | orchestrator |
2025-09-11 00:43:16.128140 | orchestrator |
2025-09-11 00:43:16.128151 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:43:16.128162 | orchestrator | Thursday 11 September 2025  00:43:16 +0000 (0:00:00.148)       0:01:07.663 ****
2025-09-11 00:43:16.128173 | orchestrator | ===============================================================================
2025-09-11 00:43:16.128183 | orchestrator | Create block VGs -------------------------------------------------------- 5.69s
2025-09-11 00:43:16.128194 | orchestrator | Create block LVs -------------------------------------------------------- 4.12s
2025-09-11 00:43:16.128205 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.96s
2025-09-11 00:43:16.128215 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.79s
2025-09-11 00:43:16.128226 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.62s
2025-09-11 00:43:16.128236 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s
2025-09-11 00:43:16.128247 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.61s
2025-09-11 00:43:16.128261 | orchestrator | Add known partitions to the list of available block devices ------------- 1.30s
2025-09-11 00:43:16.128291 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s
2025-09-11 00:43:16.439791 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2025-09-11 00:43:16.439915 | orchestrator | Print LVM report data --------------------------------------------------- 0.80s
2025-09-11 00:43:16.439930 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s
2025-09-11 00:43:16.439942 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-09-11 00:43:16.439952 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s
2025-09-11 00:43:16.439963 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-09-11 00:43:16.440013 | orchestrator | Print size needed for WAL LVs on ceph_db_wal_devices -------------------- 0.64s
2025-09-11 00:43:16.440025 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-09-11 00:43:16.440036 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.61s
2025-09-11 00:43:16.440047 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-09-11 00:43:16.440058 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-09-11 00:43:28.661462 | orchestrator | 2025-09-11 00:43:28 | INFO  | Task 74db3e9a-23d7-4fdf-b1cd-b43069f44716 (facts) was prepared for execution.
2025-09-11 00:43:28.661587 | orchestrator | 2025-09-11 00:43:28 | INFO  | It takes a moment until task 74db3e9a-23d7-4fdf-b1cd-b43069f44716 (facts) has been started and output is visible here.
2025-09-11 00:43:41.220198 | orchestrator |
2025-09-11 00:43:41.220326 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-11 00:43:41.220345 | orchestrator |
2025-09-11 00:43:41.220898 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-11 00:43:41.220917 | orchestrator | Thursday 11 September 2025  00:43:32 +0000 (0:00:00.270)       0:00:00.270 ****
2025-09-11 00:43:41.220931 | orchestrator | ok: [testbed-manager]
2025-09-11 00:43:41.220944 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:43:41.221007 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:43:41.221020 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:43:41.221032 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:43:41.221044 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:43:41.221056 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:43:41.221068 | orchestrator |
2025-09-11 00:43:41.221081 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-11 00:43:41.221093 | orchestrator | Thursday 11 September 2025  00:43:33 +0000 (0:00:01.082)       0:00:01.353 ****
2025-09-11 00:43:41.221119 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:43:41.221131 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:43:41.221143 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:43:41.221154 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:43:41.221164 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:43:41.221175 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:43:41.221185 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:41.221196 | orchestrator |
2025-09-11 00:43:41.221207 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-11 00:43:41.221217 | orchestrator |
2025-09-11 00:43:41.221228 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-11 00:43:41.221239 | orchestrator | Thursday 11 September 2025  00:43:34 +0000 (0:00:01.250)       0:00:02.603 ****
2025-09-11 00:43:41.221250 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:43:41.221260 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:43:41.221271 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:43:41.221282 | orchestrator | ok: [testbed-manager]
2025-09-11 00:43:41.221292 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:43:41.221303 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:43:41.221314 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:43:41.221324 | orchestrator |
2025-09-11 00:43:41.221335 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-11 00:43:41.221346 | orchestrator |
2025-09-11 00:43:41.221357 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-11 00:43:41.221368 | orchestrator | Thursday 11 September 2025  00:43:40 +0000 (0:00:05.383)       0:00:07.987 ****
2025-09-11 00:43:41.221379 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:43:41.221389 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:43:41.221400 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:43:41.221411 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:43:41.221421 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:43:41.221432 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:43:41.221442 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:43:41.221453 | orchestrator |
2025-09-11 00:43:41.221464 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:43:41.221475 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:43:41.221487 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:43:41.221498 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:43:41.221509 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:43:41.221519 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:43:41.221530 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:43:41.221541 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:43:41.221565 | orchestrator |
2025-09-11 00:43:41.221576 | orchestrator |
2025-09-11 00:43:41.221587 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:43:41.221597 | orchestrator | Thursday 11 September 2025  00:43:40 +0000 (0:00:00.528)       0:00:08.515 ****
2025-09-11 00:43:41.221608 | orchestrator | ===============================================================================
2025-09-11 00:43:41.221619 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.38s
2025-09-11 00:43:41.221629 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s
2025-09-11 00:43:41.221640 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s
2025-09-11 00:43:41.221650 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2025-09-11 00:43:53.390752 | orchestrator | 2025-09-11 00:43:53 | INFO  | Task 62e80ab5-2b75-4309-adc7-b093b1f51b48 (frr) was prepared for execution.
2025-09-11 00:43:53.390884 | orchestrator | 2025-09-11 00:43:53 | INFO  | It takes a moment until task 62e80ab5-2b75-4309-adc7-b093b1f51b48 (frr) has been started and output is visible here.
2025-09-11 00:44:18.152365 | orchestrator |
2025-09-11 00:44:18.152455 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-09-11 00:44:18.152470 | orchestrator |
2025-09-11 00:44:18.152482 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-09-11 00:44:18.152494 | orchestrator | Thursday 11 September 2025  00:43:57 +0000 (0:00:00.232)       0:00:00.232 ****
2025-09-11 00:44:18.152505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-09-11 00:44:18.152517 | orchestrator |
2025-09-11 00:44:18.152529 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-09-11 00:44:18.152540 | orchestrator | Thursday 11 September 2025  00:43:57 +0000 (0:00:00.218)       0:00:00.451 ****
2025-09-11 00:44:18.152551 | orchestrator | changed: [testbed-manager]
2025-09-11 00:44:18.152562 | orchestrator |
2025-09-11 00:44:18.152573 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-09-11 00:44:18.152583 | orchestrator | Thursday 11 September 2025  00:43:58 +0000 (0:00:01.094)       0:00:01.546 ****
2025-09-11 00:44:18.152594 | orchestrator | changed: [testbed-manager]
2025-09-11 00:44:18.152605 | orchestrator |
2025-09-11 00:44:18.152628 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-09-11 00:44:18.152639 | orchestrator | Thursday 11 September 2025  00:44:08 +0000 (0:00:09.507)       0:00:11.053 ****
2025-09-11 00:44:18.152650 | orchestrator | ok: [testbed-manager]
2025-09-11 00:44:18.152662 | orchestrator |
2025-09-11 00:44:18.152673 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-09-11 00:44:18.152684 | orchestrator | Thursday 11 September 2025  00:44:09 +0000 (0:00:01.214)       0:00:12.267 ****
2025-09-11 00:44:18.152694 | orchestrator | changed: [testbed-manager]
2025-09-11 00:44:18.152705 | orchestrator |
2025-09-11 00:44:18.152716 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-11 00:44:18.152726 | orchestrator | Thursday 11 September 2025  00:44:10 +0000 (0:00:00.909)       0:00:13.177 ****
2025-09-11 00:44:18.152737 | orchestrator | ok: [testbed-manager]
2025-09-11 00:44:18.152748 | orchestrator |
2025-09-11 00:44:18.152758 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-11 00:44:18.152770 | orchestrator | Thursday 11 September 2025  00:44:11 +0000 (0:00:01.136)       0:00:14.313 ****
2025-09-11 00:44:18.152781 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-11 00:44:18.152791 | orchestrator |
2025-09-11 00:44:18.152802 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-11 00:44:18.152813 | orchestrator | Thursday 11 September 2025  00:44:12 +0000 (0:00:00.800)       0:00:15.114 ****
2025-09-11 00:44:18.152823 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:44:18.152834 | orchestrator |
2025-09-11 00:44:18.152845 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-11 00:44:18.152876 | orchestrator | Thursday 11 September 2025  00:44:12 +0000 (0:00:00.165)       0:00:15.280 ****
2025-09-11 00:44:18.152887 | orchestrator | changed: [testbed-manager]
2025-09-11 00:44:18.152898 | orchestrator |
2025-09-11 00:44:18.152909 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-11 00:44:18.152948 | orchestrator | Thursday 11 September 2025  00:44:13 +0000 (0:00:00.922)       0:00:16.202 ****
2025-09-11 00:44:18.152962 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-11 00:44:18.152974 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-11 00:44:18.152987 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-11 00:44:18.153000 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-11 00:44:18.153012 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-11 00:44:18.153025 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-11 00:44:18.153037 | orchestrator |
2025-09-11 00:44:18.153050 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-11 00:44:18.153062 | orchestrator | Thursday 11 September 2025  00:44:15 +0000 (0:00:02.139)       0:00:18.342 ****
2025-09-11 00:44:18.153074 | orchestrator | ok: [testbed-manager]
2025-09-11 00:44:18.153086 | orchestrator |
2025-09-11 00:44:18.153098 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-11 00:44:18.153110 | orchestrator | Thursday 11 September 2025  00:44:16 +0000 (0:00:01.234)       0:00:19.576 ****
2025-09-11 00:44:18.153122 | orchestrator | changed: [testbed-manager]
2025-09-11 00:44:18.153134 | orchestrator |
2025-09-11 00:44:18.153146 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:44:18.153158 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 00:44:18.153170 | orchestrator |
2025-09-11 00:44:18.153182 | orchestrator |
2025-09-11 00:44:18.153206 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:44:18.153229 | orchestrator | Thursday 11 September 2025  00:44:17 +0000 (0:00:01.285)       0:00:20.862 ****
2025-09-11 00:44:18.153242 | orchestrator | ===============================================================================
2025-09-11 00:44:18.153255 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.51s
2025-09-11 00:44:18.153267 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.14s
2025-09-11 00:44:18.153279 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.29s
2025-09-11 00:44:18.153292 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.23s
2025-09-11 00:44:18.153317 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.21s
2025-09-11 00:44:18.153328 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.14s
2025-09-11 00:44:18.153339 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.09s
2025-09-11 00:44:18.153349 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.92s
2025-09-11 00:44:18.153360 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s
2025-09-11 00:44:18.153371 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.80s
2025-09-11 00:44:18.153381 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s
2025-09-11 00:44:18.153392 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.17s
2025-09-11 00:44:18.332527 | orchestrator |
2025-09-11 00:44:18.334359 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Sep 11 00:44:18 UTC 2025
2025-09-11 00:44:18.334400 | orchestrator |
2025-09-11 00:44:19.979976 | orchestrator | 2025-09-11 00:44:19 | INFO  | Collection nutshell is prepared for execution
2025-09-11 00:44:19.980060 | orchestrator | 2025-09-11 00:44:19 | INFO  | D [0] - dotfiles
2025-09-11 00:44:30.175994 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [0] - homer
2025-09-11 00:44:30.176071 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [0] - netdata
2025-09-11 00:44:30.176081 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [0] - openstackclient
2025-09-11 00:44:30.176089 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [0] - phpmyadmin
2025-09-11 00:44:30.176096 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [0] - common
2025-09-11 00:44:30.177632 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [1] -- loadbalancer
2025-09-11 00:44:30.177936 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [2] --- opensearch
2025-09-11 00:44:30.178068 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [2] --- mariadb-ng
2025-09-11 00:44:30.178584 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [3] ---- horizon
2025-09-11 00:44:30.178648 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [3] ---- keystone
2025-09-11 00:44:30.178664 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [4] ----- neutron
2025-09-11 00:44:30.178856 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [5] ------ wait-for-nova
2025-09-11 00:44:30.179211 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [5] ------ octavia
2025-09-11 00:44:30.180194 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [4] ----- barbican
2025-09-11 00:44:30.180375 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [4] ----- designate
2025-09-11 00:44:30.180680 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [4] ----- ironic
2025-09-11 00:44:30.181010 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [4] ----- placement
2025-09-11 00:44:30.181031 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [4] ----- magnum
2025-09-11 00:44:30.181724 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [1] -- openvswitch
2025-09-11 00:44:30.181991 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [2] --- ovn
2025-09-11 00:44:30.182140 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [1] --
memcached 2025-09-11 00:44:30.182430 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [1] -- redis 2025-09-11 00:44:30.182450 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [1] -- rabbitmq-ng 2025-09-11 00:44:30.182862 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [0] - kubernetes 2025-09-11 00:44:30.185129 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [1] -- kubeconfig 2025-09-11 00:44:30.185320 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [1] -- copy-kubeconfig 2025-09-11 00:44:30.185498 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [0] - ceph 2025-09-11 00:44:30.187639 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [1] -- ceph-pools 2025-09-11 00:44:30.187737 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [2] --- copy-ceph-keys 2025-09-11 00:44:30.187820 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [3] ---- cephclient 2025-09-11 00:44:30.188148 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-11 00:44:30.188168 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [4] ----- wait-for-keystone 2025-09-11 00:44:30.188251 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-11 00:44:30.188266 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [5] ------ glance 2025-09-11 00:44:30.188503 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [5] ------ cinder 2025-09-11 00:44:30.188771 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [5] ------ nova 2025-09-11 00:44:30.189029 | orchestrator | 2025-09-11 00:44:30 | INFO  | A [4] ----- prometheus 2025-09-11 00:44:30.189054 | orchestrator | 2025-09-11 00:44:30 | INFO  | D [5] ------ grafana 2025-09-11 00:44:30.362464 | orchestrator | 2025-09-11 00:44:30 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-11 00:44:30.362539 | orchestrator | 2025-09-11 00:44:30 | INFO  | Tasks are running in the background 2025-09-11 00:44:32.830336 | orchestrator | 2025-09-11 00:44:32 | INFO  | No task IDs specified, wait for 
all currently running tasks
2025-09-11 00:44:34.940023 | orchestrator | 2025-09-11 00:44:34 | INFO  | Task f542807a-8fce-4f7b-9504-bc05001f691e is in state STARTED
2025-09-11 00:44:34.940104 | orchestrator | 2025-09-11 00:44:34 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:44:34.941374 | orchestrator | 2025-09-11 00:44:34 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:44:34.942140 | orchestrator | 2025-09-11 00:44:34 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:44:34.944715 | orchestrator | 2025-09-11 00:44:34 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:44:34.945058 | orchestrator | 2025-09-11 00:44:34 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:44:34.945636 | orchestrator | 2025-09-11 00:44:34 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:44:34.945664 | orchestrator | 2025-09-11 00:44:34 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:44:37.978872 | orchestrator | 2025-09-11 00:44:37 | INFO  | Task f542807a-8fce-4f7b-9504-bc05001f691e is in state STARTED
2025-09-11 00:44:37.979104 | orchestrator | 2025-09-11 00:44:37 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:44:37.979127 | orchestrator | 2025-09-11 00:44:37 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:44:37.979150 | orchestrator | 2025-09-11 00:44:37 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:44:37.979588 | orchestrator | 2025-09-11 00:44:37 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:44:37.980198 | orchestrator | 2025-09-11 00:44:37 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:44:37.980772 | orchestrator | 2025-09-11 00:44:37 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:44:37.980934 | orchestrator | 2025-09-11 00:44:37 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:44:41.018227 | orchestrator | 2025-09-11 00:44:41 | INFO  | Task f542807a-8fce-4f7b-9504-bc05001f691e is in state STARTED
2025-09-11 00:44:41.018347 | orchestrator | 2025-09-11 00:44:41 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:44:41.018954 | orchestrator | 2025-09-11 00:44:41 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:44:41.019482 | orchestrator | 2025-09-11 00:44:41 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:44:41.019998 | orchestrator | 2025-09-11 00:44:41 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:44:41.020532 | orchestrator | 2025-09-11 00:44:41 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:44:41.021056 | orchestrator | 2025-09-11 00:44:41 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:44:41.021153 | orchestrator | 2025-09-11 00:44:41 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:44:44.073510 | orchestrator | 2025-09-11 00:44:44 | INFO  | Task f542807a-8fce-4f7b-9504-bc05001f691e is in state STARTED
2025-09-11 00:44:44.073634 | orchestrator | 2025-09-11 00:44:44 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:44:44.074306 | orchestrator | 2025-09-11 00:44:44 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:44:44.074635 | orchestrator | 2025-09-11 00:44:44 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:44:44.075169 | orchestrator | 2025-09-11 00:44:44 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:44:44.076001 | orchestrator | 2025-09-11 00:44:44 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:44:44.076456 | orchestrator | 2025-09-11 00:44:44 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:44:44.077425 | orchestrator | 2025-09-11 00:44:44 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:44:47.428216 | orchestrator | 2025-09-11 00:44:47 | INFO  | Task f542807a-8fce-4f7b-9504-bc05001f691e is in state STARTED
2025-09-11 00:44:47.428589 | orchestrator | 2025-09-11 00:44:47 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:44:47.430746 | orchestrator | 2025-09-11 00:44:47 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:44:47.430772 | orchestrator | 2025-09-11 00:44:47 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:44:47.430780 | orchestrator | 2025-09-11 00:44:47 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:44:47.431309 | orchestrator | 2025-09-11 00:44:47 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:44:47.432161 | orchestrator | 2025-09-11 00:44:47 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:44:47.432184 | orchestrator | 2025-09-11 00:44:47 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:44:50.573009 | orchestrator | 2025-09-11 00:44:50 | INFO  | Task f542807a-8fce-4f7b-9504-bc05001f691e is in state STARTED
2025-09-11 00:44:50.573126 | orchestrator | 2025-09-11 00:44:50 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:44:50.573141 | orchestrator | 2025-09-11 00:44:50 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:44:50.573152 | orchestrator | 2025-09-11 00:44:50 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:44:50.574372 | orchestrator | 2025-09-11 00:44:50 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:44:50.575083 | orchestrator | 2025-09-11 00:44:50 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:44:50.578799 | orchestrator | 2025-09-11 00:44:50 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:44:50.578819 | orchestrator | 2025-09-11 00:44:50 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:44:53.711171 | orchestrator | 2025-09-11 00:44:53 | INFO  | Task f542807a-8fce-4f7b-9504-bc05001f691e is in state STARTED
2025-09-11 00:44:53.715742 | orchestrator | 2025-09-11 00:44:53 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:44:53.728221 | orchestrator | 2025-09-11 00:44:53 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:44:53.733960 | orchestrator | 2025-09-11 00:44:53 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:44:53.737872 | orchestrator | 2025-09-11 00:44:53 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:44:53.826739 | orchestrator | 2025-09-11 00:44:53 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:44:53.826816 | orchestrator | 2025-09-11 00:44:53 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:44:53.826829 | orchestrator | 2025-09-11 00:44:53 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:44:56.842616 | orchestrator |
2025-09-11 00:44:56.842706 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-11 00:44:56.842721 | orchestrator |
2025-09-11 00:44:56.842733 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-09-11 00:44:56.842744 | orchestrator | Thursday 11 September 2025  00:44:41 +0000 (0:00:00.541) 0:00:00.541 ****
2025-09-11 00:44:56.842755 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:44:56.842767 | orchestrator | changed: [testbed-manager]
2025-09-11 00:44:56.842777 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:44:56.842788 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:44:56.842798 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:44:56.842809 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:44:56.842819 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:44:56.842830 | orchestrator |
2025-09-11 00:44:56.842841 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-09-11 00:44:56.842852 | orchestrator | Thursday 11 September 2025  00:44:45 +0000 (0:00:02.152) 0:00:04.879 ****
2025-09-11 00:44:56.842863 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-11 00:44:56.842874 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-11 00:44:56.842932 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-11 00:44:56.842943 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-11 00:44:56.842954 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-11 00:44:56.842964 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-11 00:44:56.842975 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-11 00:44:56.842986 | orchestrator |
2025-09-11 00:44:56.842997 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2025-09-11 00:44:56.843008 | orchestrator | Thursday 11 September 2025 00:44:47 +0000 (0:00:02.152) 0:00:07.031 **** 2025-09-11 00:44:56.843023 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-11 00:44:46.509762', 'end': '2025-09-11 00:44:46.516034', 'delta': '0:00:00.006272', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-11 00:44:56.843051 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-11 00:44:46.542186', 'end': '2025-09-11 00:44:46.551719', 'delta': '0:00:00.009533', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-11 00:44:56.843087 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-11 00:44:46.740838', 'end': '2025-09-11 00:44:46.750249', 'delta': '0:00:00.009411', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-11 00:44:56.843141 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-11 00:44:46.906422', 'end': '2025-09-11 00:44:46.919047', 'delta': '0:00:00.012625', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-11 00:44:56.843160 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-11 00:44:47.493211', 'end': '2025-09-11 00:44:47.500831', 'delta': '0:00:00.007620', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-11 00:44:56.843172 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-11 00:44:47.619088', 'end': '2025-09-11 00:44:47.628260', 'delta': '0:00:00.009172', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-11 00:44:56.843191 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-11 00:44:47.253362', 'end': '2025-09-11 00:44:47.264095', 'delta': '0:00:00.010733', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-11 00:44:56.843219 | orchestrator | 2025-09-11 00:44:56.843232 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-11 00:44:56.843244 | orchestrator | Thursday 11 September 2025 00:44:50 +0000 (0:00:02.239) 0:00:09.271 **** 2025-09-11 00:44:56.843256 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-11 00:44:56.843269 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-11 00:44:56.843281 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-11 00:44:56.843294 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-11 00:44:56.843306 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-11 00:44:56.843318 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-11 00:44:56.843330 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-11 00:44:56.843342 | orchestrator | 2025-09-11 00:44:56.843354 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
******************
2025-09-11 00:44:56.843367 | orchestrator | Thursday 11 September 2025  00:44:51 +0000 (0:00:01.698) 0:00:10.969 ****
2025-09-11 00:44:56.843379 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-11 00:44:56.843392 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-11 00:44:56.843404 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-11 00:44:56.843416 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-11 00:44:56.843429 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-11 00:44:56.843441 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-11 00:44:56.843453 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-11 00:44:56.843464 | orchestrator |
2025-09-11 00:44:56.843475 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:44:56.843493 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:44:56.843506 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:44:56.843517 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:44:56.843528 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:44:56.843539 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:44:56.843549 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:44:56.843560 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:44:56.843570 | orchestrator |
2025-09-11 00:44:56.843581 | orchestrator |
2025-09-11 00:44:56.843592 | orchestrator | TASKS
RECAP ********************************************************************
2025-09-11 00:44:56.843602 | orchestrator | Thursday 11 September 2025  00:44:54 +0000 (0:00:03.031) 0:00:14.001 ****
2025-09-11 00:44:56.843613 | orchestrator | ===============================================================================
2025-09-11 00:44:56.843624 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.34s
2025-09-11 00:44:56.843634 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.03s
2025-09-11 00:44:56.843653 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.24s
2025-09-11 00:44:56.843663 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.15s
2025-09-11 00:44:56.843674 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.70s
2025-09-11 00:44:56.843684 | orchestrator | 2025-09-11 00:44:56 | INFO  | Task f542807a-8fce-4f7b-9504-bc05001f691e is in state SUCCESS
2025-09-11 00:44:56.843696 | orchestrator | 2025-09-11 00:44:56 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:44:56.843959 | orchestrator | 2025-09-11 00:44:56 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:44:56.850004 | orchestrator | 2025-09-11 00:44:56 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:44:56.850067 | orchestrator | 2025-09-11 00:44:56 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:44:56.850089 | orchestrator | 2025-09-11 00:44:56 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:44:56.850101 | orchestrator | 2025-09-11 00:44:56 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:44:56.850112 | orchestrator | 2025-09-11 00:44:56 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:44:56.850123 | orchestrator | 2025-09-11 00:44:56 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:44:59.885260 | orchestrator | 2025-09-11 00:44:59 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:44:59.885528 | orchestrator | 2025-09-11 00:44:59 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:44:59.888420 | orchestrator | 2025-09-11 00:44:59 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:44:59.888824 | orchestrator | 2025-09-11 00:44:59 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:44:59.889480 | orchestrator | 2025-09-11 00:44:59 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:44:59.892648 | orchestrator | 2025-09-11 00:44:59 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:44:59.894168 | orchestrator | 2025-09-11 00:44:59 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:44:59.894201 | orchestrator | 2025-09-11 00:44:59 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:02.943275 | orchestrator | 2025-09-11 00:45:02 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:02.944155 | orchestrator | 2025-09-11 00:45:02 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:02.945743 | orchestrator | 2025-09-11 00:45:02 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:02.946580 | orchestrator | 2025-09-11 00:45:02 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:02.947846 | orchestrator | 2025-09-11 00:45:02 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:45:02.949616 | orchestrator | 2025-09-11 00:45:02 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:02.951616 | orchestrator | 2025-09-11 00:45:02 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:45:02.951698 | orchestrator | 2025-09-11 00:45:02 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:06.031829 | orchestrator | 2025-09-11 00:45:06 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:06.031978 | orchestrator | 2025-09-11 00:45:06 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:06.032373 | orchestrator | 2025-09-11 00:45:06 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:06.035790 | orchestrator | 2025-09-11 00:45:06 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:06.035853 | orchestrator | 2025-09-11 00:45:06 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:45:06.036236 | orchestrator | 2025-09-11 00:45:06 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:06.037252 | orchestrator | 2025-09-11 00:45:06 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:45:06.037275 | orchestrator | 2025-09-11 00:45:06 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:09.132787 | orchestrator | 2025-09-11 00:45:09 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:09.132937 | orchestrator | 2025-09-11 00:45:09 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:09.132956 | orchestrator | 2025-09-11 00:45:09 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:09.132968 | orchestrator | 2025-09-11 00:45:09 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:09.132979 | orchestrator | 2025-09-11 00:45:09 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:45:09.132990 | orchestrator | 2025-09-11 00:45:09 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:09.133002 | orchestrator | 2025-09-11 00:45:09 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:45:09.133013 | orchestrator | 2025-09-11 00:45:09 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:12.167312 | orchestrator | 2025-09-11 00:45:12 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:12.167397 | orchestrator | 2025-09-11 00:45:12 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:12.167651 | orchestrator | 2025-09-11 00:45:12 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:12.168428 | orchestrator | 2025-09-11 00:45:12 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:12.168711 | orchestrator | 2025-09-11 00:45:12 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:45:12.169330 | orchestrator | 2025-09-11 00:45:12 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:12.169932 | orchestrator | 2025-09-11 00:45:12 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state STARTED
2025-09-11 00:45:12.169954 | orchestrator | 2025-09-11 00:45:12 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:15.206635 | orchestrator | 2025-09-11 00:45:15 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:15.206739 | orchestrator | 2025-09-11 00:45:15 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:15.207007 | orchestrator | 2025-09-11 00:45:15 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:15.207485 | orchestrator | 2025-09-11 00:45:15 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:15.207995 | orchestrator | 2025-09-11 00:45:15 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:45:15.208449 | orchestrator | 2025-09-11 00:45:15 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:15.208845 | orchestrator | 2025-09-11 00:45:15 | INFO  | Task 08e0d8d4-a4e5-494f-9912-73c71af3bb3a is in state SUCCESS
2025-09-11 00:45:15.208909 | orchestrator | 2025-09-11 00:45:15 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:18.243304 | orchestrator | 2025-09-11 00:45:18 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:18.243378 | orchestrator | 2025-09-11 00:45:18 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:18.243682 | orchestrator | 2025-09-11 00:45:18 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:18.245659 | orchestrator | 2025-09-11 00:45:18 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:18.245946 | orchestrator | 2025-09-11 00:45:18 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:45:18.247373 | orchestrator | 2025-09-11 00:45:18 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:18.247396 | orchestrator | 2025-09-11 00:45:18 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:21.301516 | orchestrator | 2025-09-11 00:45:21 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:21.301604 | orchestrator | 2025-09-11 00:45:21 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:21.301733 | orchestrator | 2025-09-11 00:45:21 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:21.302544 | orchestrator | 2025-09-11 00:45:21 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:21.303544 | orchestrator | 2025-09-11 00:45:21 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state STARTED
2025-09-11 00:45:21.303928 | orchestrator | 2025-09-11 00:45:21 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:21.304035 | orchestrator | 2025-09-11 00:45:21 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:24.342946 | orchestrator | 2025-09-11 00:45:24 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:24.343032 | orchestrator | 2025-09-11 00:45:24 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:24.343047 | orchestrator | 2025-09-11 00:45:24 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:24.343059 | orchestrator | 2025-09-11 00:45:24 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:24.343070 | orchestrator | 2025-09-11 00:45:24 | INFO  | Task 8b271ae0-cbf5-4be9-8d89-6f0e25385dfe is in state SUCCESS
2025-09-11 00:45:24.343081 | orchestrator | 2025-09-11 00:45:24 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:24.343092 | orchestrator | 2025-09-11 00:45:24 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:27.402169 | orchestrator | 2025-09-11 00:45:27 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:27.402901 | orchestrator | 2025-09-11 00:45:27 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:27.403791 | orchestrator | 2025-09-11 00:45:27 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:27.405260 | orchestrator | 2025-09-11 00:45:27 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:27.406565 | orchestrator | 2025-09-11 00:45:27 | INFO  | Task
27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:27.406593 | orchestrator | 2025-09-11 00:45:27 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:30.477565 | orchestrator | 2025-09-11 00:45:30 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:30.478100 | orchestrator | 2025-09-11 00:45:30 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:30.479227 | orchestrator | 2025-09-11 00:45:30 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:30.480024 | orchestrator | 2025-09-11 00:45:30 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:30.480957 | orchestrator | 2025-09-11 00:45:30 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:30.481054 | orchestrator | 2025-09-11 00:45:30 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:33.524366 | orchestrator | 2025-09-11 00:45:33 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:33.524463 | orchestrator | 2025-09-11 00:45:33 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:33.524488 | orchestrator | 2025-09-11 00:45:33 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:33.524509 | orchestrator | 2025-09-11 00:45:33 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:33.524529 | orchestrator | 2025-09-11 00:45:33 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:33.524549 | orchestrator | 2025-09-11 00:45:33 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:36.571705 | orchestrator | 2025-09-11 00:45:36 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:36.573902 | orchestrator | 2025-09-11 00:45:36 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:36.575927 | orchestrator | 2025-09-11 00:45:36 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:36.577857 | orchestrator | 2025-09-11 00:45:36 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:36.580127 | orchestrator | 2025-09-11 00:45:36 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:36.580561 | orchestrator | 2025-09-11 00:45:36 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:39.644532 | orchestrator | 2025-09-11 00:45:39 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:39.646672 | orchestrator | 2025-09-11 00:45:39 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:39.648342 | orchestrator | 2025-09-11 00:45:39 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:39.649236 | orchestrator | 2025-09-11 00:45:39 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:39.651954 | orchestrator | 2025-09-11 00:45:39 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:39.651982 | orchestrator | 2025-09-11 00:45:39 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:42.700345 | orchestrator | 2025-09-11 00:45:42 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:42.701755 | orchestrator | 2025-09-11 00:45:42 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:42.702419 | orchestrator | 2025-09-11 00:45:42 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:42.703094 | orchestrator | 2025-09-11 00:45:42 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state STARTED
2025-09-11 00:45:42.703895 | orchestrator | 2025-09-11 00:45:42 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:42.704846 | orchestrator | 2025-09-11 00:45:42 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:45:45.747316 | orchestrator | 2025-09-11 00:45:45 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state STARTED
2025-09-11 00:45:45.749515 | orchestrator | 2025-09-11 00:45:45 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:45:45.751589 | orchestrator | 2025-09-11 00:45:45 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:45:45.752965 | orchestrator | 2025-09-11 00:45:45 | INFO  | Task 9c7e6f10-f7d1-41cb-a45a-24fb48f722a0 is in state SUCCESS
2025-09-11 00:45:45.755517 | orchestrator |
2025-09-11 00:45:45.755577 | orchestrator |
2025-09-11 00:45:45.755590 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-09-11 00:45:45.755602 | orchestrator |
2025-09-11 00:45:45.755612 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-09-11 00:45:45.755623 | orchestrator | Thursday 11 September 2025 00:44:40 +0000 (0:00:00.429) 0:00:00.429 ****
2025-09-11 00:45:45.755633 | orchestrator | ok: [testbed-manager] => {
2025-09-11 00:45:45.755723 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-09-11 00:45:45.755737 | orchestrator | }
2025-09-11 00:45:45.755748 | orchestrator |
2025-09-11 00:45:45.755773 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-09-11 00:45:45.755783 | orchestrator | Thursday 11 September 2025 00:44:40 +0000 (0:00:00.280) 0:00:00.710 ****
2025-09-11 00:45:45.755793 | orchestrator | ok: [testbed-manager]
2025-09-11 00:45:45.755804 | orchestrator |
2025-09-11 00:45:45.755813 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-09-11 00:45:45.755852 | orchestrator | Thursday 11 September 2025 00:44:42 +0000 (0:00:01.249) 0:00:01.960 ****
2025-09-11 00:45:45.755869 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-09-11 00:45:45.755886 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-09-11 00:45:45.755902 | orchestrator |
2025-09-11 00:45:45.755918 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-09-11 00:45:45.755936 | orchestrator | Thursday 11 September 2025 00:44:43 +0000 (0:00:01.452) 0:00:03.412 ****
2025-09-11 00:45:45.755953 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.755970 | orchestrator |
2025-09-11 00:45:45.755981 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-09-11 00:45:45.755990 | orchestrator | Thursday 11 September 2025 00:44:45 +0000 (0:00:02.134) 0:00:05.546 ****
2025-09-11 00:45:45.756000 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.756010 | orchestrator |
2025-09-11 00:45:45.756019 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-09-11 00:45:45.756029 | orchestrator | Thursday 11 September 2025 00:44:46 +0000 (0:00:01.326) 0:00:06.872 ****
2025-09-11 00:45:45.756039 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-09-11 00:45:45.756049 | orchestrator | ok: [testbed-manager]
2025-09-11 00:45:45.756058 | orchestrator |
2025-09-11 00:45:45.756068 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-09-11 00:45:45.756077 | orchestrator | Thursday 11 September 2025 00:45:10 +0000 (0:00:23.981) 0:00:30.854 ****
2025-09-11 00:45:45.756087 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.756096 | orchestrator |
2025-09-11 00:45:45.756106 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:45:45.756135 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:45:45.756147 | orchestrator |
2025-09-11 00:45:45.756157 | orchestrator |
2025-09-11 00:45:45.756166 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:45:45.756176 | orchestrator | Thursday 11 September 2025 00:45:12 +0000 (0:00:01.883) 0:00:32.738 ****
2025-09-11 00:45:45.756185 | orchestrator | ===============================================================================
2025-09-11 00:45:45.756194 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 23.98s
2025-09-11 00:45:45.756204 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.13s
2025-09-11 00:45:45.756213 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.88s
2025-09-11 00:45:45.756222 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.45s
2025-09-11 00:45:45.756232 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.33s
2025-09-11 00:45:45.756241 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.24s
2025-09-11 00:45:45.756250 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.29s
2025-09-11 00:45:45.756260 | orchestrator |
2025-09-11 00:45:45.756269 | orchestrator |
2025-09-11 00:45:45.756279 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-09-11 00:45:45.756288 | orchestrator |
2025-09-11 00:45:45.756297 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-09-11 00:45:45.756307 | orchestrator | Thursday 11 September 2025 00:44:40 +0000 (0:00:00.248) 0:00:00.248 ****
2025-09-11 00:45:45.756316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-09-11 00:45:45.756328 | orchestrator |
2025-09-11 00:45:45.756337 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-09-11 00:45:45.756347 | orchestrator | Thursday 11 September 2025 00:44:40 +0000 (0:00:00.281) 0:00:00.530 ****
2025-09-11 00:45:45.756358 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-09-11 00:45:45.756369 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-09-11 00:45:45.756380 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-09-11 00:45:45.756391 | orchestrator |
2025-09-11 00:45:45.756401 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-09-11 00:45:45.756412 | orchestrator | Thursday 11 September 2025 00:44:43 +0000 (0:00:02.120) 0:00:02.650 ****
2025-09-11 00:45:45.756422 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.756433 | orchestrator |
2025-09-11 00:45:45.756444 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-09-11 00:45:45.756459 | orchestrator | Thursday 11 September 2025 00:44:44 +0000 (0:00:01.594)
0:00:04.245 ****
2025-09-11 00:45:45.756492 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-09-11 00:45:45.756509 | orchestrator | ok: [testbed-manager]
2025-09-11 00:45:45.756525 | orchestrator |
2025-09-11 00:45:45.756542 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-09-11 00:45:45.756559 | orchestrator | Thursday 11 September 2025 00:45:15 +0000 (0:00:30.729) 0:00:34.975 ****
2025-09-11 00:45:45.756575 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.756593 | orchestrator |
2025-09-11 00:45:45.756610 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-09-11 00:45:45.756634 | orchestrator | Thursday 11 September 2025 00:45:16 +0000 (0:00:01.179) 0:00:36.154 ****
2025-09-11 00:45:45.756650 | orchestrator | ok: [testbed-manager]
2025-09-11 00:45:45.756662 | orchestrator |
2025-09-11 00:45:45.756673 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-09-11 00:45:45.756694 | orchestrator | Thursday 11 September 2025 00:45:17 +0000 (0:00:00.576) 0:00:36.730 ****
2025-09-11 00:45:45.756705 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.756715 | orchestrator |
2025-09-11 00:45:45.756724 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-09-11 00:45:45.756733 | orchestrator | Thursday 11 September 2025 00:45:20 +0000 (0:00:02.817) 0:00:39.548 ****
2025-09-11 00:45:45.756743 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.756752 | orchestrator |
2025-09-11 00:45:45.756762 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-09-11 00:45:45.756771 | orchestrator | Thursday 11 September 2025 00:45:21 +0000 (0:00:01.522) 0:00:41.070 ****
2025-09-11 00:45:45.756781 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.756790 | orchestrator |
2025-09-11 00:45:45.756799 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-09-11 00:45:45.756809 | orchestrator | Thursday 11 September 2025 00:45:22 +0000 (0:00:00.659) 0:00:41.730 ****
2025-09-11 00:45:45.756818 | orchestrator | ok: [testbed-manager]
2025-09-11 00:45:45.756853 | orchestrator |
2025-09-11 00:45:45.756863 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:45:45.756873 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:45:45.756882 | orchestrator |
2025-09-11 00:45:45.756892 | orchestrator |
2025-09-11 00:45:45.756901 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:45:45.756911 | orchestrator | Thursday 11 September 2025 00:45:23 +0000 (0:00:00.856) 0:00:42.587 ****
2025-09-11 00:45:45.756920 | orchestrator | ===============================================================================
2025-09-11 00:45:45.756930 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 30.73s
2025-09-11 00:45:45.756939 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.82s
2025-09-11 00:45:45.756948 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.12s
2025-09-11 00:45:45.756958 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.59s
2025-09-11 00:45:45.756967 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.52s
2025-09-11 00:45:45.756976 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.18s
2025-09-11 00:45:45.756986 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.86s
2025-09-11 00:45:45.756995 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.66s
2025-09-11 00:45:45.757005 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.58s
2025-09-11 00:45:45.757014 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.28s
2025-09-11 00:45:45.757023 | orchestrator |
2025-09-11 00:45:45.757033 | orchestrator |
2025-09-11 00:45:45.757042 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-11 00:45:45.757052 | orchestrator |
2025-09-11 00:45:45.757061 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-11 00:45:45.757071 | orchestrator | Thursday 11 September 2025 00:44:42 +0000 (0:00:00.744) 0:00:00.744 ****
2025-09-11 00:45:45.757080 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-09-11 00:45:45.757090 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-09-11 00:45:45.757099 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-09-11 00:45:45.757109 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-09-11 00:45:45.757118 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-09-11 00:45:45.757127 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-09-11 00:45:45.757137 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-09-11 00:45:45.757146 | orchestrator |
2025-09-11 00:45:45.757156 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-09-11 00:45:45.757171 | orchestrator |
2025-09-11 00:45:45.757181 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-09-11 00:45:45.757190 | orchestrator | Thursday 11 September 2025 00:44:43 +0000
(0:00:01.295) 0:00:02.040 ****
2025-09-11 00:45:45.757213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:45:45.757230 | orchestrator |
2025-09-11 00:45:45.757240 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-09-11 00:45:45.757250 | orchestrator | Thursday 11 September 2025 00:44:45 +0000 (0:00:02.071) 0:00:04.112 ****
2025-09-11 00:45:45.757266 | orchestrator | ok: [testbed-manager]
2025-09-11 00:45:45.757283 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:45:45.757297 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:45:45.757313 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:45:45.757328 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:45:45.757353 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:45:45.757368 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:45:45.757382 | orchestrator |
2025-09-11 00:45:45.757399 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-09-11 00:45:45.757415 | orchestrator | Thursday 11 September 2025 00:44:47 +0000 (0:00:03.982) 0:00:06.299 ****
2025-09-11 00:45:45.757432 | orchestrator | ok: [testbed-manager]
2025-09-11 00:45:45.757444 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:45:45.757457 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:45:45.757473 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:45:45.757489 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:45:45.757503 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:45:45.757525 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:45:45.757540 | orchestrator |
2025-09-11 00:45:45.757554 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-11 00:45:45.757569 | orchestrator | Thursday 11 September 2025 00:44:51 +0000 (0:00:03.982) 0:00:10.281 ****
2025-09-11 00:45:45.757583 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:45:45.757599 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:45:45.757613 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:45:45.757629 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:45:45.757644 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:45:45.757659 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:45:45.757674 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.757691 | orchestrator |
2025-09-11 00:45:45.757707 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-11 00:45:45.757724 | orchestrator | Thursday 11 September 2025 00:44:54 +0000 (0:00:02.247) 0:00:12.529 ****
2025-09-11 00:45:45.757741 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.757757 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:45:45.757774 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:45:45.757789 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:45:45.757804 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:45:45.757818 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:45:45.757861 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:45:45.757878 | orchestrator |
2025-09-11 00:45:45.757893 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-11 00:45:45.757908 | orchestrator | Thursday 11 September 2025 00:45:03 +0000 (0:00:09.709) 0:00:22.239 ****
2025-09-11 00:45:45.757922 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:45:45.757937 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:45:45.757952 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:45:45.757967 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:45:45.757982 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:45:45.757998 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:45:45.758013 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.758138 | orchestrator |
2025-09-11 00:45:45.758157 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-09-11 00:45:45.758170 | orchestrator | Thursday 11 September 2025 00:45:23 +0000 (0:00:19.382) 0:00:41.622 ****
2025-09-11 00:45:45.758181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:45:45.758193 | orchestrator |
2025-09-11 00:45:45.758202 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-09-11 00:45:45.758212 | orchestrator | Thursday 11 September 2025 00:45:24 +0000 (0:00:01.280) 0:00:42.903 ****
2025-09-11 00:45:45.758222 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-09-11 00:45:45.758232 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-09-11 00:45:45.758241 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-09-11 00:45:45.758251 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-09-11 00:45:45.758260 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-09-11 00:45:45.758269 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-09-11 00:45:45.758279 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-09-11 00:45:45.758288 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-09-11 00:45:45.758297 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-09-11 00:45:45.758307 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-09-11 00:45:45.758316 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-09-11 00:45:45.758325 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-09-11 00:45:45.758334 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-09-11 00:45:45.758344 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-09-11 00:45:45.758353 | orchestrator |
2025-09-11 00:45:45.758363 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-09-11 00:45:45.758374 | orchestrator | Thursday 11 September 2025 00:45:29 +0000 (0:00:05.231) 0:00:48.134 ****
2025-09-11 00:45:45.758383 | orchestrator | ok: [testbed-manager]
2025-09-11 00:45:45.758393 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:45:45.758402 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:45:45.758411 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:45:45.758421 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:45:45.758430 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:45:45.758445 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:45:45.758461 | orchestrator |
2025-09-11 00:45:45.758477 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-09-11 00:45:45.758493 | orchestrator | Thursday 11 September 2025 00:45:31 +0000 (0:00:01.309) 0:00:49.444 ****
2025-09-11 00:45:45.758507 | orchestrator | changed: [testbed-manager]
2025-09-11 00:45:45.758522 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:45:45.758538 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:45:45.758551 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:45:45.758566 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:45:45.758583 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:45:45.758600 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:45:45.758616 | orchestrator |
2025-09-11 00:45:45.758633 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group]
*************** 2025-09-11 00:45:45.758667 | orchestrator | Thursday 11 September 2025 00:45:32 +0000 (0:00:01.679) 0:00:51.123 **** 2025-09-11 00:45:45.758685 | orchestrator | ok: [testbed-manager] 2025-09-11 00:45:45.758702 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:45:45.758719 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:45:45.758729 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:45:45.758738 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:45:45.758748 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:45:45.758757 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:45:45.758766 | orchestrator | 2025-09-11 00:45:45.758785 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-11 00:45:45.758795 | orchestrator | Thursday 11 September 2025 00:45:34 +0000 (0:00:01.665) 0:00:52.788 **** 2025-09-11 00:45:45.758811 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:45:45.758881 | orchestrator | ok: [testbed-manager] 2025-09-11 00:45:45.758895 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:45:45.758905 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:45:45.758914 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:45:45.758923 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:45:45.758933 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:45:45.758942 | orchestrator | 2025-09-11 00:45:45.758952 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-11 00:45:45.758961 | orchestrator | Thursday 11 September 2025 00:45:37 +0000 (0:00:02.622) 0:00:55.411 **** 2025-09-11 00:45:45.758971 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-11 00:45:45.758986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:45:45.759004 | orchestrator | 2025-09-11 00:45:45.759020 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-11 00:45:45.759037 | orchestrator | Thursday 11 September 2025 00:45:38 +0000 (0:00:01.100) 0:00:56.511 **** 2025-09-11 00:45:45.759054 | orchestrator | changed: [testbed-manager] 2025-09-11 00:45:45.759072 | orchestrator | 2025-09-11 00:45:45.759090 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-11 00:45:45.759106 | orchestrator | Thursday 11 September 2025 00:45:40 +0000 (0:00:01.972) 0:00:58.484 **** 2025-09-11 00:45:45.759122 | orchestrator | changed: [testbed-manager] 2025-09-11 00:45:45.759132 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:45:45.759142 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:45:45.759151 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:45:45.759160 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:45:45.759170 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:45:45.759179 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:45:45.759188 | orchestrator | 2025-09-11 00:45:45.759198 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:45:45.759207 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:45:45.759218 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:45:45.759228 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:45:45.759237 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:45:45.759247 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-09-11 00:45:45.759257 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:45:45.759266 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:45:45.759275 | orchestrator | 2025-09-11 00:45:45.759285 | orchestrator | 2025-09-11 00:45:45.759294 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:45:45.759304 | orchestrator | Thursday 11 September 2025 00:45:43 +0000 (0:00:03.177) 0:01:01.662 **** 2025-09-11 00:45:45.759313 | orchestrator | =============================================================================== 2025-09-11 00:45:45.759331 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.38s 2025-09-11 00:45:45.759341 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.71s 2025-09-11 00:45:45.759350 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.23s 2025-09-11 00:45:45.759359 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.97s 2025-09-11 00:45:45.759369 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.18s 2025-09-11 00:45:45.759378 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.62s 2025-09-11 00:45:45.759387 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.25s 2025-09-11 00:45:45.759397 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.19s 2025-09-11 00:45:45.759406 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.07s 2025-09-11 00:45:45.759416 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.97s 2025-09-11 
00:45:45.759425 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.68s 2025-09-11 00:45:45.759443 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.67s 2025-09-11 00:45:45.759453 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.31s 2025-09-11 00:45:45.759463 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.30s 2025-09-11 00:45:45.759472 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.28s 2025-09-11 00:45:45.759482 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.10s
2025-09-11 00:45:45.759492 | orchestrator | 2025-09-11 00:45:45 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state STARTED
2025-09-11 00:45:45.759502 | orchestrator | 2025-09-11 00:45:45 | INFO  | Wait 1 second(s) until the next check
[... repeated task-state polling elided: tasks dcc2aa51-a5b8-4070-9862-cef061486e98, da118b71-d11a-4b31-94de-5301016dc31d, ad415479-4ee6-412a-bd25-209d3bfb0e07, 27629765-83a5-495f-bc65-0ba25cd68ecb reported in state STARTED roughly every 3 s from 00:45:48 to 00:46:55; state changes only shown below ...]
2025-09-11 00:46:01.000269 | orchestrator | 2025-09-11 00:46:01 | INFO  | Task 27629765-83a5-495f-bc65-0ba25cd68ecb is in state SUCCESS
2025-09-11 00:46:58.908156 | orchestrator | 2025-09-11 00:46:58 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:46:58.911115 | orchestrator | 2025-09-11 00:46:58 | INFO  | Task dcc2aa51-a5b8-4070-9862-cef061486e98 is in state SUCCESS
2025-09-11 00:46:58.913027 | orchestrator |
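Editor's note: the "Task … is in state STARTED / Wait 1 second(s) until the next check" output above is a plain poll-and-wait loop over task IDs until each reaches a terminal state. A minimal sketch of that pattern, under stated assumptions (the `get_state` callback and its return values are hypothetical stand-ins; the real osism client queries its Celery-style task backend for each UUID):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll each task ID via get_state(task_id) until all report a
    terminal state ("SUCCESS" or "FAILURE"), logging each check.
    Returns True if every task finished before the timeout."""
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending and time.monotonic() < deadline:
        still_pending = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in terminal:
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return not pending  # True only if all tasks reached a terminal state
```

Usage-wise, `get_state` would wrap whatever task-status lookup the deployment tooling exposes; the timestamps in the log suggest each full polling round here took about three seconds, not the nominal one-second interval, because each status lookup itself takes time.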
2025-09-11 00:46:58.913063 | orchestrator | 2025-09-11 00:46:58.913073 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-11 00:46:58.913106 | orchestrator | 2025-09-11 00:46:58.913115 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-11 00:46:58.913123 | orchestrator | Thursday 11 September 2025 00:44:59 +0000 (0:00:00.185) 0:00:00.185 **** 2025-09-11 00:46:58.913131 | orchestrator | ok: [testbed-manager] 2025-09-11 00:46:58.913140 | orchestrator | 2025-09-11 00:46:58.913147 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-11 00:46:58.913155 | orchestrator | Thursday 11 September 2025 00:45:00 +0000 (0:00:00.781) 0:00:00.967 **** 2025-09-11 00:46:58.913163 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-11 00:46:58.913172 | orchestrator | 2025-09-11 00:46:58.913180 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-11 00:46:58.913188 | orchestrator | Thursday 11 September 2025 00:45:00 +0000 (0:00:00.502) 0:00:01.469 **** 2025-09-11 00:46:58.913196 | orchestrator | changed: [testbed-manager] 2025-09-11 00:46:58.913204 | orchestrator | 2025-09-11 00:46:58.913211 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-11 00:46:58.913225 | orchestrator | Thursday 11 September 2025 00:45:01 +0000 (0:00:00.992) 0:00:02.461 **** 2025-09-11 00:46:58.913233 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
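Editor's note: the "FAILED - RETRYING: … (10 retries left)." line above is Ansible's bounded retry loop (`retries`/`delay`/`until` on a task). A sketch of that semantics, assuming the Ansible convention of one initial attempt plus `retries` further attempts (the `check` and `sleep` callables here are illustrative, not part of Ansible's API):

```python
import time

def retry_until(check, retries=10, delay=5.0, sleep=time.sleep):
    """Run check() until it returns truthy, allowing one initial attempt
    plus up to `retries` re-tries, sleeping `delay` seconds between
    attempts. Returns True on success, False if all attempts fail."""
    for attempt in range(retries + 1):
        if check():
            return True
        if attempt < retries:
            # Ansible prints the remaining retry budget on each failure.
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            sleep(delay)
    return False
```

In the log, the first attempt at "Manage phpmyadmin service" failed (likely because the container was still pulling or starting) and the second attempt succeeded, which is why the task still reports `ok` and accounts for most of the role's 48.32 s runtime.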
2025-09-11 00:46:58.913297 | orchestrator | ok: [testbed-manager] 2025-09-11 00:46:58.913306 | orchestrator | 2025-09-11 00:46:58.913314 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-11 00:46:58.913322 | orchestrator | Thursday 11 September 2025 00:45:50 +0000 (0:00:48.320) 0:00:50.782 **** 2025-09-11 00:46:58.913330 | orchestrator | changed: [testbed-manager] 2025-09-11 00:46:58.913338 | orchestrator | 2025-09-11 00:46:58.913345 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:46:58.913354 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:46:58.913364 | orchestrator | 2025-09-11 00:46:58.913371 | orchestrator | 2025-09-11 00:46:58.913379 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:46:58.913387 | orchestrator | Thursday 11 September 2025 00:45:59 +0000 (0:00:09.226) 0:01:00.009 **** 2025-09-11 00:46:58.913396 | orchestrator | =============================================================================== 2025-09-11 00:46:58.913404 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 48.32s 2025-09-11 00:46:58.913412 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 9.23s 2025-09-11 00:46:58.913420 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.99s 2025-09-11 00:46:58.913428 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.78s 2025-09-11 00:46:58.913435 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.50s 2025-09-11 00:46:58.913443 | orchestrator | 2025-09-11 00:46:58.913451 | orchestrator | 2025-09-11 00:46:58.913459 | orchestrator | PLAY [Apply role common] 
******************************************************* 2025-09-11 00:46:58.913467 | orchestrator | 2025-09-11 00:46:58.913475 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-11 00:46:58.913483 | orchestrator | Thursday 11 September 2025 00:44:34 +0000 (0:00:00.224) 0:00:00.224 **** 2025-09-11 00:46:58.913491 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:46:58.913500 | orchestrator | 2025-09-11 00:46:58.913508 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-11 00:46:58.913516 | orchestrator | Thursday 11 September 2025 00:44:35 +0000 (0:00:01.175) 0:00:01.399 **** 2025-09-11 00:46:58.913523 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-11 00:46:58.913531 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-11 00:46:58.913547 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-11 00:46:58.913555 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-11 00:46:58.913563 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-11 00:46:58.913571 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-11 00:46:58.913579 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-11 00:46:58.913586 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-11 00:46:58.913594 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-11 00:46:58.913604 | orchestrator | changed: [testbed-node-3] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-11 00:46:58.913612 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-11 00:46:58.913619 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-11 00:46:58.913627 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-11 00:46:58.913635 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-11 00:46:58.913643 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-11 00:46:58.913651 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-11 00:46:58.913671 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-11 00:46:58.913679 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-11 00:46:58.913687 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-11 00:46:58.913695 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-11 00:46:58.913703 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-11 00:46:58.913710 | orchestrator | 2025-09-11 00:46:58.913718 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-11 00:46:58.913726 | orchestrator | Thursday 11 September 2025 00:44:39 +0000 (0:00:03.960) 0:00:05.360 **** 2025-09-11 00:46:58.913785 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:46:58.913796 | orchestrator | 2025-09-11 
00:46:58.914380 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-11 00:46:58.914410 | orchestrator | Thursday 11 September 2025 00:44:41 +0000 (0:00:01.226) 0:00:06.586 **** 2025-09-11 00:46:58.914422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.914435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.914453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.914461 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.914469 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.914478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.914520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.914534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.914542 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.914551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914657 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914759 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914767 | orchestrator |
2025-09-11 00:46:58.914776 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-09-11 00:46:58.914784 | orchestrator | Thursday 11 September 2025 00:44:46 +0000 (0:00:05.609) 0:00:12.195 ****
2025-09-11 00:46:58.914820 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.914831 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914846 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914861 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:46:58.914870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.914879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.914906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914931 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:46:58.914943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.914952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.914976 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:46:58.914986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.914996 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:46:58.915006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915026 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915037 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:46:58.915054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915083 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:46:58.915093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915122 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:46:58.915131 | orchestrator |
2025-09-11 00:46:58.915140 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-09-11 00:46:58.915149 | orchestrator | Thursday 11 September 2025 00:44:47 +0000 (0:00:00.996) 0:00:13.192 ****
2025-09-11 00:46:58.915158 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915168 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915183 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915231 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:46:58.915240 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:46:58.915249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915350 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:46:58.915358 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:46:58.915365 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:46:58.915374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915409 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:46:58.915417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915446 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:46:58.915532 | orchestrator |
2025-09-11 00:46:58.915541 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-11 00:46:58.915549 | orchestrator | Thursday 11 September 2025 00:44:51 +0000 (0:00:03.434) 0:00:16.626 ****
2025-09-11 00:46:58.915557 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:46:58.915565 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:46:58.915573 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:46:58.915581 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:46:58.915589 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:46:58.915596 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:46:58.915604 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:46:58.915612 | orchestrator |
2025-09-11 00:46:58.915620 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-09-11 00:46:58.915628 | orchestrator | Thursday 11 September 2025 00:44:51 +0000 (0:00:00.653) 0:00:17.279 ****
2025-09-11 00:46:58.915636 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:46:58.915643 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:46:58.915651 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:46:58.915658 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:46:58.915666 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:46:58.915674 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:46:58.915682 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:46:58.915689 | orchestrator |
2025-09-11 00:46:58.915697 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-09-11 00:46:58.915705 | orchestrator | Thursday 11 September 2025 00:44:52 +0000 (0:00:01.097) 0:00:18.377 ****
2025-09-11 00:46:58.915713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915773 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.915811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915874 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.915913 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.915931 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.915940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.915953 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.915961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.915969 | orchestrator | 2025-09-11 00:46:58.915977 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-11 00:46:58.915985 | orchestrator | Thursday 11 September 2025 00:44:58 +0000 (0:00:05.257) 0:00:23.635 **** 2025-09-11 00:46:58.915993 | orchestrator | [WARNING]: Skipped 2025-09-11 00:46:58.916002 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-11 00:46:58.916010 | orchestrator | to this access issue: 2025-09-11 00:46:58.916018 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-11 00:46:58.916026 | orchestrator | directory 2025-09-11 00:46:58.916034 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-11 00:46:58.916042 | orchestrator | 2025-09-11 00:46:58.916049 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-11 00:46:58.916057 | orchestrator | Thursday 11 September 2025 00:44:58 +0000 (0:00:00.939) 0:00:24.574 **** 2025-09-11 00:46:58.916065 | orchestrator | [WARNING]: Skipped 2025-09-11 00:46:58.916073 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-11 00:46:58.916087 | orchestrator | to this access issue: 2025-09-11 00:46:58.916095 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-11 00:46:58.916102 | orchestrator | directory 2025-09-11 00:46:58.916110 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-11 00:46:58.916118 | orchestrator | 2025-09-11 00:46:58.916126 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-11 
00:46:58.916134 | orchestrator | Thursday 11 September 2025 00:45:00 +0000 (0:00:01.364) 0:00:25.939 **** 2025-09-11 00:46:58.916142 | orchestrator | [WARNING]: Skipped 2025-09-11 00:46:58.916150 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-11 00:46:58.916157 | orchestrator | to this access issue: 2025-09-11 00:46:58.916165 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-11 00:46:58.916173 | orchestrator | directory 2025-09-11 00:46:58.916181 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-11 00:46:58.916189 | orchestrator | 2025-09-11 00:46:58.916196 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-11 00:46:58.916204 | orchestrator | Thursday 11 September 2025 00:45:01 +0000 (0:00:01.178) 0:00:27.117 **** 2025-09-11 00:46:58.916212 | orchestrator | [WARNING]: Skipped 2025-09-11 00:46:58.916220 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-11 00:46:58.916228 | orchestrator | to this access issue: 2025-09-11 00:46:58.916236 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-11 00:46:58.916243 | orchestrator | directory 2025-09-11 00:46:58.916251 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-11 00:46:58.916259 | orchestrator | 2025-09-11 00:46:58.916267 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-11 00:46:58.916275 | orchestrator | Thursday 11 September 2025 00:45:02 +0000 (0:00:00.741) 0:00:27.859 **** 2025-09-11 00:46:58.916283 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:46:58.916291 | orchestrator | changed: [testbed-manager] 2025-09-11 00:46:58.916299 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:46:58.916307 | orchestrator | changed: [testbed-node-2] 2025-09-11 
00:46:58.916314 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:46:58.916322 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:46:58.916330 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:46:58.916337 | orchestrator | 2025-09-11 00:46:58.916345 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-11 00:46:58.916353 | orchestrator | Thursday 11 September 2025 00:45:06 +0000 (0:00:03.781) 0:00:31.640 **** 2025-09-11 00:46:58.916361 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-11 00:46:58.916369 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-11 00:46:58.916377 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-11 00:46:58.916389 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-11 00:46:58.916397 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-11 00:46:58.916405 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-11 00:46:58.916413 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-11 00:46:58.916420 | orchestrator | 2025-09-11 00:46:58.916428 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-11 00:46:58.916436 | orchestrator | Thursday 11 September 2025 00:45:09 +0000 (0:00:02.976) 0:00:34.617 **** 2025-09-11 00:46:58.916444 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:46:58.916452 | orchestrator | changed: [testbed-manager] 2025-09-11 00:46:58.916465 | orchestrator | changed: [testbed-node-1] 2025-09-11 
00:46:58.916473 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:46:58.916480 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:46:58.916561 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:46:58.916572 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:46:58.916580 | orchestrator | 2025-09-11 00:46:58.916587 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-11 00:46:58.916595 | orchestrator | Thursday 11 September 2025 00:45:11 +0000 (0:00:02.875) 0:00:37.492 **** 2025-09-11 00:46:58.916604 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.916612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:46:58.916621 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.916629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:46:58.916638 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.916653 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:46:58.916668 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.916684 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.916692 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.916701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:46:58.916709 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.916718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:46:58.916726 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.916755 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.916773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:46:58.916782 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.916790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:46:58.916799 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.916807 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.916815 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.916823 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:46:58.916831 | orchestrator | 2025-09-11 00:46:58.916839 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-11 00:46:58.916847 | orchestrator | Thursday 11 September 2025 00:45:14 +0000 (0:00:02.429) 0:00:39.922 **** 2025-09-11 00:46:58.916855 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-11 00:46:58.916869 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-11 00:46:58.916877 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-11 00:46:58.916892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-11 00:46:58.916901 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-11 00:46:58.916908 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-11 00:46:58.916916 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-11 00:46:58.916924 | orchestrator | 2025-09-11 00:46:58.916932 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-11 00:46:58.916940 | orchestrator | Thursday 11 September 2025 00:45:16 +0000 (0:00:02.349) 0:00:42.272 **** 2025-09-11 00:46:58.916947 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-11 00:46:58.916955 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-11 00:46:58.916966 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-11 00:46:58.916975 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-11 00:46:58.916982 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-11 00:46:58.916990 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-11 00:46:58.916998 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-11 00:46:58.917006 | orchestrator | 2025-09-11 00:46:58.917013 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-11 00:46:58.917021 | orchestrator | Thursday 11 September 2025 00:45:18 +0000 (0:00:02.159) 0:00:44.431 **** 2025-09-11 00:46:58.917029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.917038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.917046 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.917054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.917068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-11 00:46:58.917082 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.917097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-11 00:46:58.917105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917179 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917216 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:46:58.917259 | orchestrator |
2025-09-11 00:46:58.917271 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-09-11 00:46:58.917281 | orchestrator | Thursday 11 September 2025 00:45:22 +0000 (0:00:03.525) 0:00:47.957 ****
2025-09-11 00:46:58.917290 | orchestrator | changed: [testbed-manager]
2025-09-11 00:46:58.917300 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:46:58.917309 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:46:58.917317 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:46:58.917326 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:46:58.917335 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:46:58.917410 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:46:58.917422 | orchestrator |
2025-09-11 00:46:58.917431 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-09-11 00:46:58.917440 | orchestrator | Thursday 11 September 2025 00:45:23 +0000 (0:00:01.455) 0:00:49.412 ****
2025-09-11 00:46:58.917449 | orchestrator | changed: [testbed-manager]
2025-09-11 00:46:58.917458 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:46:58.917466 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:46:58.917475 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:46:58.917484 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:46:58.917493 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:46:58.917502 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:46:58.917511 | orchestrator |
2025-09-11 00:46:58.917520 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-11 00:46:58.917529 | orchestrator | Thursday 11 September 2025 00:45:24 +0000 (0:00:00.092) 0:00:50.560 ****
2025-09-11 00:46:58.917537 | orchestrator |
2025-09-11 00:46:58.917547 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-11 00:46:58.917555 | orchestrator | Thursday 11 September 2025 00:45:25 +0000 (0:00:00.069) 0:00:50.652 ****
2025-09-11 00:46:58.917563 | orchestrator |
2025-09-11 00:46:58.917571 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-11 00:46:58.917579 | orchestrator | Thursday 11 September 2025 00:45:25 +0000 (0:00:00.054) 0:00:50.776 ****
2025-09-11 00:46:58.917586 | orchestrator |
2025-09-11 00:46:58.917594 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-11 00:46:58.917602 | orchestrator | Thursday 11 September 2025 00:45:25 +0000 (0:00:00.169) 0:00:50.946 ****
2025-09-11 00:46:58.917610 | orchestrator |
2025-09-11 00:46:58.917618 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-11 00:46:58.917625 | orchestrator | Thursday 11 September 2025 00:45:25 +0000 (0:00:00.050) 0:00:50.997 ****
2025-09-11 00:46:58.917633 | orchestrator |
2025-09-11 00:46:58.917648 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-11 00:46:58.917655 | orchestrator | Thursday 11 September 2025 00:45:25 +0000 (0:00:00.077) 0:00:51.074 ****
2025-09-11 00:46:58.917686 | orchestrator |
2025-09-11 00:46:58.917694 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-09-11 00:46:58.917702 | orchestrator | Thursday 11 September 2025 00:45:25 +0000 (0:00:00.082) 0:00:51.157 ****
2025-09-11 00:46:58.917710 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:46:58.917717 | orchestrator | changed: [testbed-manager]
2025-09-11 00:46:58.917725 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:46:58.917733 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:46:58.917789 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:46:58.917797 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:46:58.917804 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:46:58.917812 | orchestrator |
2025-09-11 00:46:58.917820 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-09-11 00:46:58.917828 | orchestrator | Thursday 11 September 2025 00:46:02 +0000 (0:00:36.830) 0:01:27.987 ****
2025-09-11 00:46:58.917835 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:46:58.917843 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:46:58.917851 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:46:58.917859 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:46:58.917866 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:46:58.917874 | orchestrator | changed: [testbed-manager]
2025-09-11 00:46:58.917882 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:46:58.917890 | orchestrator |
2025-09-11 00:46:58.917897 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-09-11 00:46:58.917905 | orchestrator | Thursday 11 September 2025 00:46:45 +0000 (0:00:43.543) 0:02:11.531 ****
2025-09-11 00:46:58.917913 | orchestrator | ok: [testbed-manager]
2025-09-11 00:46:58.917921 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:46:58.917928 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:46:58.917936 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:46:58.917942 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:46:58.917949 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:46:58.917955 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:46:58.917961 | orchestrator |
2025-09-11 00:46:58.917968 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-09-11 00:46:58.917975 | orchestrator | Thursday 11 September 2025 00:46:48 +0000 (0:00:02.235) 0:02:13.766 ****
2025-09-11 00:46:58.917981 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:46:58.917988 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:46:58.917994 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:46:58.918001 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:46:58.918008 | orchestrator | changed: [testbed-manager]
2025-09-11 00:46:58.918058 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:46:58.918067 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:46:58.918073 | orchestrator |
2025-09-11 00:46:58.918080 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:46:58.918088 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-11 00:46:58.918095 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-11 00:46:58.918108 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-11 00:46:58.918115 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-11 00:46:58.918127 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-11 00:46:58.918134 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-11 00:46:58.918141 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-11 00:46:58.918147 | orchestrator |
2025-09-11 00:46:58.918154 | orchestrator |
2025-09-11 00:46:58.918160 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:46:58.918172 | orchestrator | Thursday 11 September 2025 00:46:57 +0000 (0:00:09.266) 0:02:23.033 ****
2025-09-11 00:46:58.918179 | orchestrator | ===============================================================================
2025-09-11 00:46:58.918185 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 43.54s
2025-09-11 00:46:58.918192 | orchestrator | common : Restart fluentd container ------------------------------------- 36.83s
2025-09-11 00:46:58.918199 | orchestrator | common : Restart cron container ----------------------------------------- 9.27s
2025-09-11 00:46:58.918205 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.61s
2025-09-11 00:46:58.918211 | orchestrator | common : Copying over config.json files for services -------------------- 5.26s
2025-09-11 00:46:58.918218 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.96s
2025-09-11 00:46:58.918225 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.78s
2025-09-11 00:46:58.918231 | orchestrator | common : Check common containers ---------------------------------------- 3.53s
2025-09-11 00:46:58.918238 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.43s
2025-09-11 00:46:58.918244 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.98s
2025-09-11 00:46:58.918311 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.88s
2025-09-11 00:46:58.918368 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.43s
2025-09-11 00:46:58.918378 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.35s
2025-09-11 00:46:58.918384 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.24s
2025-09-11 00:46:58.918391 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.16s
2025-09-11 00:46:58.918397 | orchestrator | common : Creating log volume -------------------------------------------- 1.45s
2025-09-11 00:46:58.918404 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.36s
2025-09-11 00:46:58.918411 | orchestrator | common : include_tasks -------------------------------------------------- 1.23s
2025-09-11 00:46:58.918417 | orchestrator | common : Find custom fluentd format config files ------------------------ 1.18s
2025-09-11 00:46:58.918424 | orchestrator | common : include_tasks -------------------------------------------------- 1.18s
2025-09-11 00:46:58.918431 | orchestrator | 2025-09-11 00:46:58 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:46:58.918437 | orchestrator | 2025-09-11 00:46:58 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:46:58.918444 | orchestrator | 2025-09-11 00:46:58 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:46:58.918451 | orchestrator | 2025-09-11 00:46:58 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state STARTED
2025-09-11 00:46:58.918461 | orchestrator | 2025-09-11 00:46:58 | INFO  | Task 2abd687d-5dcb-4747-8263-4718f8ac6ccd is in state STARTED
2025-09-11 00:46:58.918469 | orchestrator | 2025-09-11 00:46:58 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:47:01.944361
| orchestrator | 2025-09-11 00:47:01 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:47:01.944554 | orchestrator | 2025-09-11 00:47:01 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:47:01.945307 | orchestrator | 2025-09-11 00:47:01 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:47:01.945857 | orchestrator | 2025-09-11 00:47:01 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:47:01.946539 | orchestrator | 2025-09-11 00:47:01 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state STARTED
2025-09-11 00:47:01.947273 | orchestrator | 2025-09-11 00:47:01 | INFO  | Task 2abd687d-5dcb-4747-8263-4718f8ac6ccd is in state STARTED
2025-09-11 00:47:01.947296 | orchestrator | 2025-09-11 00:47:01 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:47:04.975508 | orchestrator | 2025-09-11 00:47:04 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:47:04.975907 | orchestrator | 2025-09-11 00:47:04 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:47:04.977983 | orchestrator | 2025-09-11 00:47:04 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:47:04.980293 | orchestrator | 2025-09-11 00:47:04 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:47:04.980793 | orchestrator | 2025-09-11 00:47:04 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state STARTED
2025-09-11 00:47:04.981520 | orchestrator | 2025-09-11 00:47:04 | INFO  | Task 2abd687d-5dcb-4747-8263-4718f8ac6ccd is in state STARTED
2025-09-11 00:47:04.981552 | orchestrator | 2025-09-11 00:47:04 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:47:08.015078 | orchestrator | 2025-09-11 00:47:08 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:47:08.015382 | orchestrator | 2025-09-11 00:47:08 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:47:08.016393 | orchestrator | 2025-09-11 00:47:08 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:47:08.017066 | orchestrator | 2025-09-11 00:47:08 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:47:08.017686 | orchestrator | 2025-09-11 00:47:08 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state STARTED
2025-09-11 00:47:08.018400 | orchestrator | 2025-09-11 00:47:08 | INFO  | Task 2abd687d-5dcb-4747-8263-4718f8ac6ccd is in state STARTED
2025-09-11 00:47:08.018425 | orchestrator | 2025-09-11 00:47:08 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:47:11.045751 | orchestrator | 2025-09-11 00:47:11 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:47:11.045875 | orchestrator | 2025-09-11 00:47:11 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:47:11.045891 | orchestrator | 2025-09-11 00:47:11 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:47:11.045903 | orchestrator | 2025-09-11 00:47:11 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:47:11.045913 | orchestrator | 2025-09-11 00:47:11 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state STARTED
2025-09-11 00:47:11.045924 | orchestrator | 2025-09-11 00:47:11 | INFO  | Task 2abd687d-5dcb-4747-8263-4718f8ac6ccd is in state STARTED
2025-09-11 00:47:11.045935 | orchestrator | 2025-09-11 00:47:11 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:47:14.105341 | orchestrator | 2025-09-11 00:47:14 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:47:14.107585 | orchestrator | 2025-09-11 00:47:14 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:47:14.152291 | orchestrator | 2025-09-11 00:47:14 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:47:14.152337 | orchestrator | 2025-09-11 00:47:14 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:47:14.152358 | orchestrator | 2025-09-11 00:47:14 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state STARTED
2025-09-11 00:47:14.152378 | orchestrator | 2025-09-11 00:47:14 | INFO  | Task 2abd687d-5dcb-4747-8263-4718f8ac6ccd is in state STARTED
2025-09-11 00:47:14.152390 | orchestrator | 2025-09-11 00:47:14 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:47:17.166617 | orchestrator | 2025-09-11 00:47:17 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:47:17.166827 | orchestrator | 2025-09-11 00:47:17 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:47:17.167517 | orchestrator | 2025-09-11 00:47:17 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED
2025-09-11 00:47:17.167967 | orchestrator | 2025-09-11 00:47:17 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:47:17.171808 | orchestrator | 2025-09-11 00:47:17 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:47:17.173367 | orchestrator | 2025-09-11 00:47:17 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state STARTED
2025-09-11 00:47:17.173799 | orchestrator | 2025-09-11 00:47:17 | INFO  | Task 2abd687d-5dcb-4747-8263-4718f8ac6ccd is in state SUCCESS
2025-09-11 00:47:17.173906 | orchestrator | 2025-09-11 00:47:17 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:47:20.231937 | orchestrator | 2025-09-11 00:47:20 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:47:20.232132 | orchestrator | 2025-09-11 00:47:20 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:47:20.232668 | orchestrator | 2025-09-11 00:47:20 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED
2025-09-11 00:47:20.233521 | orchestrator | 2025-09-11 00:47:20 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:47:20.233998 | orchestrator | 2025-09-11 00:47:20 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:47:20.234799 | orchestrator | 2025-09-11 00:47:20 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state STARTED
2025-09-11 00:47:20.235081 | orchestrator | 2025-09-11 00:47:20 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:47:23.265360 | orchestrator | 2025-09-11 00:47:23 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:47:23.266996 | orchestrator | 2025-09-11 00:47:23 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:47:23.268509 | orchestrator | 2025-09-11 00:47:23 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED
2025-09-11 00:47:23.270546 | orchestrator | 2025-09-11 00:47:23 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:47:23.270722 | orchestrator | 2025-09-11 00:47:23 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:47:23.271381 | orchestrator | 2025-09-11 00:47:23 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state STARTED
2025-09-11 00:47:23.271433 | orchestrator | 2025-09-11 00:47:23 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:47:26.311447 | orchestrator | 2025-09-11 00:47:26 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:47:26.311559 | orchestrator | 2025-09-11 00:47:26 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:47:26.312380 | orchestrator | 2025-09-11 00:47:26 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED
2025-09-11 00:47:26.313268 |
orchestrator | 2025-09-11 00:47:26 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:47:26.315515 | orchestrator | 2025-09-11 00:47:26 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:47:26.315550 | orchestrator | 2025-09-11 00:47:26 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state STARTED
2025-09-11 00:47:26.315562 | orchestrator | 2025-09-11 00:47:26 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:47:29.357964 | orchestrator | 2025-09-11 00:47:29 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:47:29.358718 | orchestrator | 2025-09-11 00:47:29 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:47:29.362381 | orchestrator | 2025-09-11 00:47:29 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED
2025-09-11 00:47:29.364312 | orchestrator | 2025-09-11 00:47:29 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED
2025-09-11 00:47:29.369186 | orchestrator | 2025-09-11 00:47:29 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED
2025-09-11 00:47:29.369570 | orchestrator | 2025-09-11 00:47:29 | INFO  | Task 47fa2b60-9e6d-483f-929c-dff5e40dd6c9 is in state SUCCESS
2025-09-11 00:47:29.371397 | orchestrator |
2025-09-11 00:47:29.371431 | orchestrator |
2025-09-11 00:47:29.371444 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-11 00:47:29.371455 | orchestrator |
2025-09-11 00:47:29.371466 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-11 00:47:29.371477 | orchestrator | Thursday 11 September 2025 00:47:01 +0000 (0:00:00.235) 0:00:00.236 ****
2025-09-11 00:47:29.371488 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:47:29.371500 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:47:29.371511 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:47:29.371522 | orchestrator |
2025-09-11 00:47:29.371533 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-11 00:47:29.371544 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.291) 0:00:00.527 ****
2025-09-11 00:47:29.371555 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-09-11 00:47:29.371566 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-09-11 00:47:29.371577 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-09-11 00:47:29.371587 | orchestrator |
2025-09-11 00:47:29.371598 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-09-11 00:47:29.371609 | orchestrator |
2025-09-11 00:47:29.371620 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-09-11 00:47:29.371631 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.324) 0:00:00.851 ****
2025-09-11 00:47:29.371641 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:47:29.371652 | orchestrator |
2025-09-11 00:47:29.371663 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-09-11 00:47:29.371674 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.459) 0:00:01.311 ****
2025-09-11 00:47:29.371685 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-11 00:47:29.371695 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-11 00:47:29.371725 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-11 00:47:29.371737 | orchestrator |
2025-09-11 00:47:29.371747 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-09-11 00:47:29.371758 | orchestrator | Thursday 11 September 2025 00:47:03 +0000 (0:00:00.793) 0:00:02.105 ****
2025-09-11 00:47:29.371768 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-11 00:47:29.371779 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-11 00:47:29.371819 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-11 00:47:29.371831 | orchestrator |
2025-09-11 00:47:29.371842 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-09-11 00:47:29.371852 | orchestrator | Thursday 11 September 2025 00:47:05 +0000 (0:00:01.777) 0:00:03.882 ****
2025-09-11 00:47:29.371863 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:47:29.371874 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:47:29.371884 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:47:29.371895 | orchestrator |
2025-09-11 00:47:29.371905 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-09-11 00:47:29.371916 | orchestrator | Thursday 11 September 2025 00:47:07 +0000 (0:00:01.672) 0:00:05.554 ****
2025-09-11 00:47:29.371926 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:47:29.371937 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:47:29.371947 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:47:29.371957 | orchestrator |
2025-09-11 00:47:29.371968 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:47:29.371979 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:47:29.371991 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:47:29.372001 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:47:29.372012 | orchestrator |
2025-09-11 00:47:29.372022 | orchestrator |
2025-09-11 00:47:29.372033 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:47:29.372044 | orchestrator | Thursday 11 September 2025 00:47:14 +0000 (0:00:06.970) 0:00:12.524 ****
2025-09-11 00:47:29.372055 | orchestrator | ===============================================================================
2025-09-11 00:47:29.372065 | orchestrator | memcached : Restart memcached container --------------------------------- 6.97s
2025-09-11 00:47:29.372076 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.78s
2025-09-11 00:47:29.372086 | orchestrator | memcached : Check memcached container ----------------------------------- 1.67s
2025-09-11 00:47:29.372097 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.79s
2025-09-11 00:47:29.372107 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.46s
2025-09-11 00:47:29.372118 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s
2025-09-11 00:47:29.372129 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-09-11 00:47:29.372139 | orchestrator |
2025-09-11 00:47:29.372150 | orchestrator |
2025-09-11 00:47:29.372160 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-11 00:47:29.372171 | orchestrator |
2025-09-11 00:47:29.372182 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-11 00:47:29.372197 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.208) 0:00:00.208 ****
2025-09-11 00:47:29.372216 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:47:29.372237 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:47:29.372256 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:47:29.372274 | orchestrator |
2025-09-11 00:47:29.372298 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-11 00:47:29.372354 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.275) 0:00:00.484 ****
2025-09-11 00:47:29.372370 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-11 00:47:29.372381 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-11 00:47:29.372392 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-11 00:47:29.372403 | orchestrator |
2025-09-11 00:47:29.372414 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-11 00:47:29.372424 | orchestrator |
2025-09-11 00:47:29.372435 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-11 00:47:29.372446 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.353) 0:00:00.838 ****
2025-09-11 00:47:29.372456 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:47:29.372467 | orchestrator |
2025-09-11 00:47:29.372478 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-11 00:47:29.372489 | orchestrator | Thursday 11 September 2025 00:47:03 +0000 (0:00:00.513) 0:00:01.351 ****
2025-09-11 00:47:29.372502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-11 00:47:29.372525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-11 00:47:29.372537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-11 00:47:29.372549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-11 00:47:29.372561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment':
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372598 | orchestrator | 2025-09-11 00:47:29.372609 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-11 00:47:29.372620 | orchestrator | Thursday 11 September 2025 00:47:04 +0000 (0:00:01.316) 0:00:02.668 **** 2025-09-11 00:47:29.372631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372717 | orchestrator | 2025-09-11 00:47:29.372728 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-11 00:47:29.372739 | orchestrator | Thursday 11 September 2025 00:47:07 +0000 (0:00:03.052) 0:00:05.720 **** 2025-09-11 00:47:29.372751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372858 | orchestrator | 2025-09-11 00:47:29.372875 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-11 00:47:29.372886 | orchestrator | Thursday 11 September 2025 00:47:10 +0000 (0:00:02.561) 
0:00:08.282 **** 2025-09-11 00:47:29.372898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.372978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.373007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-11 00:47:29.373028 | orchestrator | 
2025-09-11 00:47:29.373046 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-11 00:47:29.373061 | orchestrator | Thursday 11 September 2025 00:47:11 +0000 (0:00:01.821) 0:00:10.103 **** 2025-09-11 00:47:29.373071 | orchestrator | 2025-09-11 00:47:29.373082 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-11 00:47:29.373100 | orchestrator | Thursday 11 September 2025 00:47:12 +0000 (0:00:00.196) 0:00:10.300 **** 2025-09-11 00:47:29.373112 | orchestrator | 2025-09-11 00:47:29.373122 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-11 00:47:29.373133 | orchestrator | Thursday 11 September 2025 00:47:12 +0000 (0:00:00.140) 0:00:10.441 **** 2025-09-11 00:47:29.373143 | orchestrator | 2025-09-11 00:47:29.373154 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-11 00:47:29.373165 | orchestrator | Thursday 11 September 2025 00:47:12 +0000 (0:00:00.248) 0:00:10.689 **** 2025-09-11 00:47:29.373176 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:47:29.373186 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:47:29.373197 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:47:29.373208 | orchestrator | 2025-09-11 00:47:29.373219 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-11 00:47:29.373229 | orchestrator | Thursday 11 September 2025 00:47:20 +0000 (0:00:07.621) 0:00:18.310 **** 2025-09-11 00:47:29.373240 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:47:29.373251 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:47:29.373261 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:47:29.373272 | orchestrator | 2025-09-11 00:47:29.373283 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 
00:47:29.373294 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:47:29.373305 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:47:29.373316 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:47:29.373326 | orchestrator | 2025-09-11 00:47:29.373337 | orchestrator | 2025-09-11 00:47:29.373348 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:47:29.373361 | orchestrator | Thursday 11 September 2025 00:47:27 +0000 (0:00:07.414) 0:00:25.724 **** 2025-09-11 00:47:29.373380 | orchestrator | =============================================================================== 2025-09-11 00:47:29.373398 | orchestrator | redis : Restart redis container ----------------------------------------- 7.62s 2025-09-11 00:47:29.373423 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.41s 2025-09-11 00:47:29.373441 | orchestrator | redis : Copying over default config.json files -------------------------- 3.05s 2025-09-11 00:47:29.373470 | orchestrator | redis : Copying over redis config files --------------------------------- 2.56s 2025-09-11 00:47:29.373489 | orchestrator | redis : Check redis containers ------------------------------------------ 1.82s 2025-09-11 00:47:29.373507 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.32s 2025-09-11 00:47:29.373525 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.59s 2025-09-11 00:47:29.373543 | orchestrator | redis : include_tasks --------------------------------------------------- 0.51s 2025-09-11 00:47:29.373562 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2025-09-11 00:47:29.373581 | orchestrator | 
Group hosts based on Kolla action --------------------------------------- 0.28s 2025-09-11 00:47:29.373600 | orchestrator | 2025-09-11 00:47:29 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:47:32.425260 | orchestrator | 2025-09-11 00:47:32 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:47:32.425355 | orchestrator | 2025-09-11 00:47:32 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:47:32.425927 | orchestrator | 2025-09-11 00:47:32 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED 2025-09-11 00:47:32.426581 | orchestrator | 2025-09-11 00:47:32 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED 2025-09-11 00:47:32.427306 | orchestrator | 2025-09-11 00:47:32 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED 2025-09-11 00:47:32.428006 | orchestrator | 2025-09-11 00:47:32 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:48:02.869180 | orchestrator | 2025-09-11 00:48:02 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:48:02.869690 | orchestrator | 2025-09-11 00:48:02 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:48:02.870877 | orchestrator | 2025-09-11 00:48:02 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED 2025-09-11 00:48:02.871337 | orchestrator | 2025-09-11 00:48:02 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED 2025-09-11 00:48:02.872181 | orchestrator | 2025-09-11 00:48:02 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state STARTED 2025-09-11 00:48:02.872205 | orchestrator | 2025-09-11 00:48:02 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:48:05.902998 | orchestrator | 2025-09-11 00:48:05 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:48:05.903107 | orchestrator | 2025-09-11 00:48:05 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:48:05.903537 | orchestrator | 2025-09-11 00:48:05 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED 2025-09-11 00:48:05.904081 | orchestrator | 2025-09-11 00:48:05 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state STARTED 2025-09-11 00:48:05.905590 | orchestrator | 2025-09-11 00:48:05 | INFO  | Task a9b9d7b7-c255-4ec9-a0d5-9836a5ec518a is in state SUCCESS 2025-09-11 00:48:05.907328 | orchestrator | 2025-09-11 00:48:05.907367 | orchestrator | 2025-09-11 00:48:05.907380 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 00:48:05.907391 | orchestrator | 2025-09-11 00:48:05.907403 | orchestrator | TASK [Group hosts based on Kolla action]
*************************************** 2025-09-11 00:48:05.907414 | orchestrator | Thursday 11 September 2025 00:47:01 +0000 (0:00:00.259) 0:00:00.259 **** 2025-09-11 00:48:05.907425 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:05.907436 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:05.907447 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:05.907458 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:48:05.907469 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:48:05.907479 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:48:05.907490 | orchestrator | 2025-09-11 00:48:05.907501 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 00:48:05.907512 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.569) 0:00:00.829 **** 2025-09-11 00:48:05.907542 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-11 00:48:05.907554 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-11 00:48:05.907564 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-11 00:48:05.907575 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-11 00:48:05.907585 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-11 00:48:05.907596 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-11 00:48:05.907607 | orchestrator | 2025-09-11 00:48:05.907617 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-11 00:48:05.907628 | orchestrator | 2025-09-11 00:48:05.907638 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-11 00:48:05.907649 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.499) 
0:00:01.328 **** 2025-09-11 00:48:05.907660 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:48:05.907672 | orchestrator | 2025-09-11 00:48:05.907683 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-11 00:48:05.907694 | orchestrator | Thursday 11 September 2025 00:47:04 +0000 (0:00:01.508) 0:00:02.837 **** 2025-09-11 00:48:05.907705 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-11 00:48:05.907716 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-11 00:48:05.907726 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-11 00:48:05.907737 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-11 00:48:05.907748 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-11 00:48:05.907758 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-11 00:48:05.907769 | orchestrator | 2025-09-11 00:48:05.907779 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-11 00:48:05.907790 | orchestrator | Thursday 11 September 2025 00:47:06 +0000 (0:00:01.674) 0:00:04.511 **** 2025-09-11 00:48:05.907801 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-11 00:48:05.907811 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-11 00:48:05.907822 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-11 00:48:05.907833 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-11 00:48:05.907843 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-11 00:48:05.907873 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-11 00:48:05.907884 | orchestrator | 2025-09-11 00:48:05.907895 | orchestrator | TASK [module-load : 
Drop module persistence] *********************************** 2025-09-11 00:48:05.907907 | orchestrator | Thursday 11 September 2025 00:47:07 +0000 (0:00:01.463) 0:00:05.974 **** 2025-09-11 00:48:05.907921 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-11 00:48:05.907933 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-11 00:48:05.907946 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:05.907959 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-11 00:48:05.907971 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:05.907984 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-11 00:48:05.907996 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:05.908008 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-11 00:48:05.908021 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:05.908034 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:05.908052 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-11 00:48:05.908065 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:05.908077 | orchestrator | 2025-09-11 00:48:05.908090 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-11 00:48:05.908110 | orchestrator | Thursday 11 September 2025 00:47:08 +0000 (0:00:00.918) 0:00:06.893 **** 2025-09-11 00:48:05.908123 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:05.908135 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:05.908148 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:05.908160 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:05.908173 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:05.908185 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:05.908197 | orchestrator | 2025-09-11 00:48:05.908210 | orchestrator | TASK [openvswitch : Ensuring 
config directories exist] ************************* 2025-09-11 00:48:05.908222 | orchestrator | Thursday 11 September 2025 00:47:09 +0000 (0:00:00.917) 0:00:07.810 **** 2025-09-11 00:48:05.908252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908325 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908429 | orchestrator | 2025-09-11 00:48:05.908441 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-11 00:48:05.908452 | orchestrator | Thursday 11 September 2025 00:47:11 +0000 (0:00:01.627) 0:00:09.438 **** 2025-09-11 00:48:05.908463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908611 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908629 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908641 | orchestrator | 2025-09-11 00:48:05.908652 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-11 00:48:05.908663 | orchestrator | Thursday 11 September 2025 00:47:14 +0000 (0:00:03.319) 0:00:12.758 **** 2025-09-11 00:48:05.908674 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:05.908685 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:05.908695 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:05.908706 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:05.908716 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:05.908727 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:05.908737 | orchestrator | 2025-09-11 00:48:05.908748 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-11 00:48:05.908759 | orchestrator | Thursday 11 September 2025 00:47:15 +0000 (0:00:01.533) 0:00:14.291 **** 2025-09-11 00:48:05.908770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908782 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908832 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908933 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908951 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-11 00:48:05.908974 | orchestrator | 2025-09-11 00:48:05.908984 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-11 00:48:05.908995 | orchestrator | Thursday 11 September 2025 00:47:18 +0000 (0:00:02.586) 0:00:16.877 **** 2025-09-11 00:48:05.909006 | orchestrator | 2025-09-11 00:48:05.909017 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-11 00:48:05.909033 | orchestrator | Thursday 11 September 2025 00:47:18 +0000 (0:00:00.298) 0:00:17.175 **** 2025-09-11 00:48:05.909044 | orchestrator | 2025-09-11 00:48:05.909055 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-11 00:48:05.909065 | orchestrator | Thursday 11 September 2025 00:47:19 +0000 (0:00:00.239) 0:00:17.415 **** 2025-09-11 00:48:05.909076 | orchestrator | 2025-09-11 00:48:05.909086 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-11 00:48:05.909097 | orchestrator | Thursday 11 September 2025 00:47:19 +0000 (0:00:00.300) 0:00:17.715 **** 2025-09-11 00:48:05.909107 | orchestrator | 2025-09-11 00:48:05.909118 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-11 00:48:05.909128 | orchestrator | Thursday 11 September 2025 00:47:19 +0000 (0:00:00.289) 0:00:18.007 **** 2025-09-11 00:48:05.909139 | orchestrator | 2025-09-11 00:48:05.909150 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-11 00:48:05.909160 | orchestrator | Thursday 11 September 2025 00:47:19 +0000 (0:00:00.149) 0:00:18.157 **** 2025-09-11 00:48:05.909171 | orchestrator | 2025-09-11 00:48:05.909181 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-11 
00:48:05.909192 | orchestrator | Thursday 11 September 2025 00:47:19 +0000 (0:00:00.134) 0:00:18.291 **** 2025-09-11 00:48:05.909203 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:05.909213 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:48:05.909224 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:48:05.909235 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:48:05.909245 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:48:05.909256 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:48:05.909266 | orchestrator | 2025-09-11 00:48:05.909277 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-11 00:48:05.909288 | orchestrator | Thursday 11 September 2025 00:47:31 +0000 (0:00:11.081) 0:00:29.373 **** 2025-09-11 00:48:05.909298 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:05.909309 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:05.909320 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:48:05.909330 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:05.909341 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:48:05.909351 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:48:05.909362 | orchestrator | 2025-09-11 00:48:05.909372 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-11 00:48:05.909383 | orchestrator | Thursday 11 September 2025 00:47:32 +0000 (0:00:01.863) 0:00:31.237 **** 2025-09-11 00:48:05.909394 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:48:05.909404 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:48:05.909415 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:48:05.909425 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:48:05.909436 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:48:05.909450 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:05.909461 | orchestrator | 2025-09-11 00:48:05.909472 | orchestrator | 
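The handler sequence above (restart the openvswitch_db container, then wait for the service to become ready before restarting the vswitchd container) follows a common poll-with-retries pattern. A minimal sketch of that wait loop, with a hypothetical `check` callable standing in for the actual readiness probe (e.g. `ovsdb-client list-dbs` run inside the container) — this is an illustration of the pattern, not the role's actual implementation:

```python
import time

def poll_until_ready(check, retries=10, delay=1.0):
    """Call `check` until it returns True or retries are exhausted.

    Mirrors the shape of a handler that waits for a service: a bounded
    number of attempts with a fixed delay between them.
    """
    for attempt in range(1, retries + 1):
        if check():
            return attempt  # number of attempts it took
        time.sleep(delay)
    raise TimeoutError(f"service not ready after {retries} attempts")

# Example: a probe that succeeds on the third call (stand-in for a real
# readiness command such as `ovsdb-client list-dbs`).
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

attempts = poll_until_ready(fake_probe, retries=5, delay=0)
```

Restarting the DB server first and gating the vswitchd restart on its readiness avoids a window where vswitchd starts without a reachable ovsdb.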
TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-11 00:48:05.909482 | orchestrator | Thursday 11 September 2025 00:47:41 +0000 (0:00:08.470) 0:00:39.707 **** 2025-09-11 00:48:05.909493 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-11 00:48:05.909504 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-11 00:48:05.909515 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-11 00:48:05.909526 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-11 00:48:05.909537 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-11 00:48:05.909553 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-11 00:48:05.909570 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-11 00:48:05.909581 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-11 00:48:05.909592 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-11 00:48:05.909602 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-11 00:48:05.909612 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-11 00:48:05.909623 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 
'testbed-node-3'}) 2025-09-11 00:48:05.909634 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-11 00:48:05.909644 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-11 00:48:05.909655 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-11 00:48:05.909665 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-11 00:48:05.909676 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-11 00:48:05.909687 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-11 00:48:05.909697 | orchestrator | 2025-09-11 00:48:05.909708 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-11 00:48:05.909719 | orchestrator | Thursday 11 September 2025 00:47:48 +0000 (0:00:07.247) 0:00:46.954 **** 2025-09-11 00:48:05.909730 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-11 00:48:05.909740 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:05.909751 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-11 00:48:05.909761 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:05.909772 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-11 00:48:05.909782 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:05.909793 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-11 00:48:05.909804 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-11 00:48:05.909814 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-11 00:48:05.909825 | 
orchestrator | 2025-09-11 00:48:05.909835 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-11 00:48:05.909846 | orchestrator | Thursday 11 September 2025 00:47:50 +0000 (0:00:02.318) 0:00:49.273 **** 2025-09-11 00:48:05.909906 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-11 00:48:05.909917 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:05.909928 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-11 00:48:05.909939 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:05.909949 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-11 00:48:05.909960 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:05.909971 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-11 00:48:05.909981 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-11 00:48:05.909992 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-11 00:48:05.910002 | orchestrator | 2025-09-11 00:48:05.910013 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-11 00:48:05.910077 | orchestrator | Thursday 11 September 2025 00:47:54 +0000 (0:00:03.577) 0:00:52.850 **** 2025-09-11 00:48:05.910094 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:05.910103 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:48:05.910113 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:48:05.910122 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:48:05.910132 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:48:05.910141 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:48:05.910150 | orchestrator | 2025-09-11 00:48:05.910165 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:48:05.910175 | orchestrator | 
testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-11 00:48:05.910185 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-11 00:48:05.910195 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-11 00:48:05.910204 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-11 00:48:05.910214 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-11 00:48:05.910230 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-11 00:48:05.910240 | orchestrator | 2025-09-11 00:48:05.910250 | orchestrator | 2025-09-11 00:48:05.910259 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:48:05.910269 | orchestrator | Thursday 11 September 2025 00:48:04 +0000 (0:00:10.487) 0:01:03.338 **** 2025-09-11 00:48:05.910278 | orchestrator | =============================================================================== 2025-09-11 00:48:05.910288 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.96s 2025-09-11 00:48:05.910297 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.08s 2025-09-11 00:48:05.910307 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.25s 2025-09-11 00:48:05.910316 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.58s 2025-09-11 00:48:05.910325 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.32s 2025-09-11 00:48:05.910335 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.59s 2025-09-11 00:48:05.910344 | orchestrator | 
openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.32s 2025-09-11 00:48:05.910353 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.86s 2025-09-11 00:48:05.910363 | orchestrator | module-load : Load modules ---------------------------------------------- 1.67s 2025-09-11 00:48:05.910372 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.63s 2025-09-11 00:48:05.910381 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.53s 2025-09-11 00:48:05.910391 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.51s 2025-09-11 00:48:05.910400 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.46s 2025-09-11 00:48:05.910410 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.41s 2025-09-11 00:48:05.910419 | orchestrator | module-load : Drop module persistence ----------------------------------- 0.92s 2025-09-11 00:48:05.910429 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.92s 2025-09-11 00:48:05.910438 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.57s 2025-09-11 00:48:05.910447 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2025-09-11 00:48:05.910457 | orchestrator | 2025-09-11 00:48:05 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:48:08.971700 | orchestrator | 2025-09-11 00:48:08 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:48:08.971784 | orchestrator | 2025-09-11 00:48:08 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:48:08.971799 | orchestrator | 2025-09-11 00:48:08 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:48:08.971811 | 
orchestrator | 2025-09-11 00:48:08 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED 2025-09-11 00:48:08.971822 | orchestrator | 2025-09-11 00:48:08 | INFO  | Task cc346b06-190b-4f2c-9a93-2d6cf7fed879 is in state STARTED 2025-09-11 00:48:08.971833 | orchestrator | 2025-09-11 00:48:08 | INFO  | Task c3eec48f-145d-4d94-98a9-f074136b5645 is in state STARTED 2025-09-11 00:48:08.971843 | orchestrator | 2025-09-11 00:48:08 | INFO  | Task ad415479-4ee6-412a-bd25-209d3bfb0e07 is in state SUCCESS 2025-09-11 00:48:08.971915 | orchestrator | 2025-09-11 00:48:08 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:48:08.972759 | orchestrator | 2025-09-11 00:48:08.972947 | orchestrator | 2025-09-11 00:48:08.972968 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-11 00:48:08.972979 | orchestrator | 2025-09-11 00:48:08.972990 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-11 00:48:08.973002 | orchestrator | Thursday 11 September 2025 00:44:34 +0000 (0:00:00.171) 0:00:00.171 **** 2025-09-11 00:48:08.973013 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:48:08.973024 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:48:08.973035 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:48:08.973045 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:08.973056 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:08.973067 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.973077 | orchestrator | 2025-09-11 00:48:08.973088 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-11 00:48:08.973099 | orchestrator | Thursday 11 September 2025 00:44:35 +0000 (0:00:00.725) 0:00:00.896 **** 2025-09-11 00:48:08.973110 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:08.973121 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:08.973132 | orchestrator | 
skipping: [testbed-node-5] 2025-09-11 00:48:08.973143 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.973171 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.973182 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.973193 | orchestrator | 2025-09-11 00:48:08.973204 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-11 00:48:08.973215 | orchestrator | Thursday 11 September 2025 00:44:36 +0000 (0:00:00.506) 0:00:01.403 **** 2025-09-11 00:48:08.973225 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:08.973236 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:08.973247 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:08.973257 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.973268 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.973278 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.973289 | orchestrator | 2025-09-11 00:48:08.973300 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-11 00:48:08.973311 | orchestrator | Thursday 11 September 2025 00:44:36 +0000 (0:00:00.843) 0:00:02.247 **** 2025-09-11 00:48:08.973321 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:48:08.973332 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:48:08.973343 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:48:08.973353 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:08.973364 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:48:08.973375 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:48:08.973385 | orchestrator | 2025-09-11 00:48:08.973396 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-11 00:48:08.973425 | orchestrator | Thursday 11 September 2025 00:44:38 +0000 (0:00:01.944) 0:00:04.192 **** 2025-09-11 00:48:08.973436 | orchestrator | changed: 
[testbed-node-3] 2025-09-11 00:48:08.973447 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:48:08.973457 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:48:08.973468 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:08.973479 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:48:08.973489 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:48:08.973500 | orchestrator | 2025-09-11 00:48:08.973511 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-11 00:48:08.973521 | orchestrator | Thursday 11 September 2025 00:44:39 +0000 (0:00:00.927) 0:00:05.119 **** 2025-09-11 00:48:08.973532 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:48:08.973543 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:48:08.973553 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:48:08.973564 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:08.973575 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:48:08.973588 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:48:08.973601 | orchestrator | 2025-09-11 00:48:08.973614 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-11 00:48:08.973626 | orchestrator | Thursday 11 September 2025 00:44:40 +0000 (0:00:01.157) 0:00:06.277 **** 2025-09-11 00:48:08.973639 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:08.973651 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:08.973664 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:08.973676 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.973689 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.973701 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.973713 | orchestrator | 2025-09-11 00:48:08.973726 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-11 00:48:08.973739 | orchestrator | 
Thursday 11 September 2025 00:44:41 +0000 (0:00:00.892) 0:00:07.169 **** 2025-09-11 00:48:08.973752 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:08.973765 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:08.973777 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:08.973790 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.973803 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.973815 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.973825 | orchestrator | 2025-09-11 00:48:08.973836 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-11 00:48:08.973847 | orchestrator | Thursday 11 September 2025 00:44:43 +0000 (0:00:01.381) 0:00:08.551 **** 2025-09-11 00:48:08.973894 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-11 00:48:08.973916 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-11 00:48:08.973934 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:08.973950 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-11 00:48:08.973961 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-11 00:48:08.973972 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:08.973983 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-11 00:48:08.973993 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-11 00:48:08.974004 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-11 00:48:08.974062 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-11 00:48:08.974087 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:08.974098 | orchestrator | skipping: 
[testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-11 00:48:08.974109 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-11 00:48:08.974120 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.974143 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.974154 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-11 00:48:08.974171 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-11 00:48:08.974182 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.974193 | orchestrator | 2025-09-11 00:48:08.974203 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-11 00:48:08.974214 | orchestrator | Thursday 11 September 2025 00:44:43 +0000 (0:00:00.724) 0:00:09.275 **** 2025-09-11 00:48:08.974225 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:08.974235 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:08.974246 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:08.974256 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.974267 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.974278 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.974288 | orchestrator | 2025-09-11 00:48:08.974299 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-11 00:48:08.974311 | orchestrator | Thursday 11 September 2025 00:44:45 +0000 (0:00:01.405) 0:00:10.681 **** 2025-09-11 00:48:08.974322 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:48:08.974333 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:48:08.974343 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:48:08.974354 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:08.974364 | orchestrator | ok: [testbed-node-1] 2025-09-11 
00:48:08.974375 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.974385 | orchestrator | 2025-09-11 00:48:08.974396 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-11 00:48:08.974407 | orchestrator | Thursday 11 September 2025 00:44:46 +0000 (0:00:00.925) 0:00:11.607 **** 2025-09-11 00:48:08.974418 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:48:08.974428 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:08.974439 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:48:08.974449 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:48:08.974460 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:48:08.974471 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:48:08.974481 | orchestrator | 2025-09-11 00:48:08.974492 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-11 00:48:08.974503 | orchestrator | Thursday 11 September 2025 00:44:52 +0000 (0:00:06.001) 0:00:17.608 **** 2025-09-11 00:48:08.974513 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:08.974524 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:08.974535 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.974545 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:08.974556 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.974566 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.974577 | orchestrator | 2025-09-11 00:48:08.974588 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-11 00:48:08.974598 | orchestrator | Thursday 11 September 2025 00:44:53 +0000 (0:00:01.178) 0:00:18.786 **** 2025-09-11 00:48:08.974609 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:08.974620 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:08.974630 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:08.974641 | 
orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.974651 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.974662 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.974672 | orchestrator | 2025-09-11 00:48:08.974683 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-11 00:48:08.974695 | orchestrator | Thursday 11 September 2025 00:44:55 +0000 (0:00:01.701) 0:00:20.488 **** 2025-09-11 00:48:08.974706 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:48:08.974716 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:48:08.974727 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:48:08.974744 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:08.974754 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:08.974765 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.974776 | orchestrator | 2025-09-11 00:48:08.974786 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-11 00:48:08.974797 | orchestrator | Thursday 11 September 2025 00:44:55 +0000 (0:00:00.740) 0:00:21.229 **** 2025-09-11 00:48:08.974808 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-11 00:48:08.974819 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-11 00:48:08.974830 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-11 00:48:08.974840 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-11 00:48:08.974851 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-11 00:48:08.974908 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-11 00:48:08.974920 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-11 00:48:08.974940 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-11 00:48:08.974951 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 
2025-09-11 00:48:08.974962 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-11 00:48:08.974972 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-11 00:48:08.974983 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-11 00:48:08.974993 | orchestrator | 2025-09-11 00:48:08.975004 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-11 00:48:08.975015 | orchestrator | Thursday 11 September 2025 00:44:57 +0000 (0:00:01.762) 0:00:22.991 **** 2025-09-11 00:48:08.975026 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:48:08.975036 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:48:08.975047 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:48:08.975057 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:08.975068 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:48:08.975078 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:48:08.975089 | orchestrator | 2025-09-11 00:48:08.975109 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-11 00:48:08.975121 | orchestrator | 2025-09-11 00:48:08.975132 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-11 00:48:08.975142 | orchestrator | Thursday 11 September 2025 00:44:59 +0000 (0:00:02.139) 0:00:25.130 **** 2025-09-11 00:48:08.975153 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:08.975164 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:08.975175 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.975185 | orchestrator | 2025-09-11 00:48:08.975196 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-11 00:48:08.975212 | orchestrator | Thursday 11 September 2025 00:45:00 +0000 (0:00:01.035) 0:00:26.166 **** 2025-09-11 00:48:08.975223 | orchestrator | ok: 
[testbed-node-0] 2025-09-11 00:48:08.975234 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.975244 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:08.975255 | orchestrator | 2025-09-11 00:48:08.975266 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-11 00:48:08.975276 | orchestrator | Thursday 11 September 2025 00:45:01 +0000 (0:00:01.141) 0:00:27.308 **** 2025-09-11 00:48:08.975287 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:08.975298 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.975308 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:08.975319 | orchestrator | 2025-09-11 00:48:08.975329 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-11 00:48:08.975340 | orchestrator | Thursday 11 September 2025 00:45:02 +0000 (0:00:00.947) 0:00:28.256 **** 2025-09-11 00:48:08.975351 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:08.975362 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.975372 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:08.975383 | orchestrator | 2025-09-11 00:48:08.975393 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-11 00:48:08.975411 | orchestrator | Thursday 11 September 2025 00:45:04 +0000 (0:00:01.166) 0:00:29.422 **** 2025-09-11 00:48:08.975422 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.975432 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.975443 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.975453 | orchestrator | 2025-09-11 00:48:08.975464 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-11 00:48:08.975475 | orchestrator | Thursday 11 September 2025 00:45:04 +0000 (0:00:00.553) 0:00:29.976 **** 2025-09-11 00:48:08.975486 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:08.975496 | 
orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:08.975507 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.975517 | orchestrator | 2025-09-11 00:48:08.975528 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-11 00:48:08.975539 | orchestrator | Thursday 11 September 2025 00:45:05 +0000 (0:00:00.781) 0:00:30.757 **** 2025-09-11 00:48:08.975549 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:08.975560 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:48:08.975571 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:48:08.975581 | orchestrator | 2025-09-11 00:48:08.975592 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-11 00:48:08.975603 | orchestrator | Thursday 11 September 2025 00:45:06 +0000 (0:00:01.285) 0:00:32.043 **** 2025-09-11 00:48:08.975614 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:48:08.975624 | orchestrator | 2025-09-11 00:48:08.975635 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-11 00:48:08.975646 | orchestrator | Thursday 11 September 2025 00:45:07 +0000 (0:00:00.801) 0:00:32.845 **** 2025-09-11 00:48:08.975656 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:08.975667 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:08.975678 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.975688 | orchestrator | 2025-09-11 00:48:08.975699 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-11 00:48:08.975710 | orchestrator | Thursday 11 September 2025 00:45:09 +0000 (0:00:01.764) 0:00:34.610 **** 2025-09-11 00:48:08.975720 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.975731 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.975742 | orchestrator | changed: 
[testbed-node-0] 2025-09-11 00:48:08.975752 | orchestrator | 2025-09-11 00:48:08.975763 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-11 00:48:08.975774 | orchestrator | Thursday 11 September 2025 00:45:09 +0000 (0:00:00.653) 0:00:35.263 **** 2025-09-11 00:48:08.975784 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.975795 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:08.975805 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.975816 | orchestrator | 2025-09-11 00:48:08.975827 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-11 00:48:08.975837 | orchestrator | Thursday 11 September 2025 00:45:11 +0000 (0:00:01.176) 0:00:36.440 **** 2025-09-11 00:48:08.975848 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.975978 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.976006 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:08.976015 | orchestrator | 2025-09-11 00:48:08.976023 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-11 00:48:08.976031 | orchestrator | Thursday 11 September 2025 00:45:12 +0000 (0:00:01.245) 0:00:37.686 **** 2025-09-11 00:48:08.976039 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.976046 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.976054 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.976061 | orchestrator | 2025-09-11 00:48:08.976069 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-11 00:48:08.976077 | orchestrator | Thursday 11 September 2025 00:45:12 +0000 (0:00:00.315) 0:00:38.001 **** 2025-09-11 00:48:08.976093 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.976101 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.976108 | orchestrator | skipping: 
[testbed-node-1] 2025-09-11 00:48:08.976116 | orchestrator | 2025-09-11 00:48:08.976124 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-11 00:48:08.976132 | orchestrator | Thursday 11 September 2025 00:45:13 +0000 (0:00:00.568) 0:00:38.570 **** 2025-09-11 00:48:08.976142 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:48:08.976155 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:48:08.976164 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:48:08.976172 | orchestrator | 2025-09-11 00:48:08.976189 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-11 00:48:08.976197 | orchestrator | Thursday 11 September 2025 00:45:14 +0000 (0:00:01.752) 0:00:40.322 **** 2025-09-11 00:48:08.976206 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-11 00:48:08.976223 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-11 00:48:08.976232 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-11 00:48:08.976240 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-11 00:48:08.976248 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-11 00:48:08.976256 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-09-11 00:48:08.976264 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-11 00:48:08.976271 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-11 00:48:08.976279 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-11 00:48:08.976287 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-11 00:48:08.976295 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-11 00:48:08.976303 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-11 00:48:08.976311 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-11 00:48:08.976319 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-11 00:48:08.976326 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
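The retry loop polls until every server actually shows up in the cluster. A minimal sketch of such a readiness check, assuming `kubectl get nodes` output is piped in (the helper name and layout are illustrative, not taken from the role):

```shell
# all_nodes_joined: count how many nodes report "Ready" in
# `kubectl get nodes` output read from stdin, and succeed only when
# at least <expected> nodes are Ready (illustrative helper).
all_nodes_joined() {  # usage: kubectl get nodes | all_nodes_joined <expected>
  local ready
  # Skip the header row (NR > 1); the STATUS column is field 2.
  ready=$(awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }')
  [ "$ready" -ge "$1" ]
}
```

The role wraps a check like this in a retry loop (20 attempts here) so transient etcd/join delays do not fail the play immediately.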
2025-09-11 00:48:08.976334 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:48:08.976342 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:48:08.976350 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:48:08.976358 | orchestrator |
2025-09-11 00:48:08.976366 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-09-11 00:48:08.976374 | orchestrator | Thursday 11 September 2025 00:46:10 +0000 (0:00:55.280) 0:01:35.602 ****
2025-09-11 00:48:08.976381 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.976389 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:48:08.976401 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:48:08.976409 | orchestrator |
2025-09-11 00:48:08.976417 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-09-11 00:48:08.976425 | orchestrator | Thursday 11 September 2025 00:46:10 +0000 (0:00:00.305) 0:01:35.907 ****
2025-09-11 00:48:08.976433 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:48:08.976441 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:48:08.976448 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:48:08.976456 | orchestrator |
2025-09-11 00:48:08.976464 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-09-11 00:48:08.976472 | orchestrator | Thursday 11 September 2025 00:46:11 +0000 (0:00:01.068) 0:01:36.975 ****
2025-09-11 00:48:08.976480 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:48:08.976487 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:48:08.976495 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:48:08.976503 | orchestrator |
2025-09-11 00:48:08.976511 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-09-11 00:48:08.976519 | orchestrator | Thursday 11 September 2025 00:46:12 +0000 (0:00:01.153) 0:01:38.129 ****
2025-09-11 00:48:08.976526 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:48:08.976534 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:48:08.976542 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:48:08.976550 | orchestrator |
2025-09-11 00:48:08.976557 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-09-11 00:48:08.976565 | orchestrator | Thursday 11 September 2025 00:46:38 +0000 (0:00:26.095) 0:02:04.224 ****
2025-09-11 00:48:08.976573 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:48:08.976581 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:48:08.976589 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:48:08.976596 | orchestrator |
2025-09-11 00:48:08.976604 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-09-11 00:48:08.976612 | orchestrator | Thursday 11 September 2025 00:46:39 +0000 (0:00:00.611) 0:02:04.835 ****
2025-09-11 00:48:08.976620 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:48:08.976627 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:48:08.976635 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:48:08.976643 | orchestrator |
2025-09-11 00:48:08.976655 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-09-11 00:48:08.976663 | orchestrator | Thursday 11 September 2025 00:46:40 +0000 (0:00:00.612) 0:02:05.447 ****
2025-09-11 00:48:08.976671 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:48:08.976679 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:48:08.976687 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:48:08.976694 | orchestrator |
2025-09-11 00:48:08.976702 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-09-11 00:48:08.976710 | orchestrator | Thursday 11 September 2025 00:46:40 +0000 (0:00:00.628) 0:02:06.076 ****
2025-09-11 00:48:08.976721 | orchestrator | ok: [testbed-node-0]
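The surrounding node-token tasks (register the file's access mode, widen access, read the token, restore the mode) amount to a small read-with-restore dance. A minimal local sketch, with an illustrative helper name and path (the real file lives at /var/lib/rancher/k3s/server/node-token):

```shell
# read_token_preserving_mode: widen permissions just long enough to read
# the k3s node-token, then restore the recorded mode (illustrative helper).
read_token_preserving_mode() {  # usage: read_token_preserving_mode <file>
  local mode token
  mode=$(stat -c '%a' "$1")   # register the current access mode
  chmod g+r,o+r "$1"          # temporarily make the token readable
  token=$(cat "$1")
  chmod "$mode" "$1"          # restore the original access mode
  printf '%s\n' "$token"
}
```

The token read here is what lets the agent nodes in the next play register against the servers.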
2025-09-11 00:48:08.976729 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:48:08.976737 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:48:08.976745 | orchestrator |
2025-09-11 00:48:08.976753 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-09-11 00:48:08.976761 | orchestrator | Thursday 11 September 2025 00:46:41 +0000 (0:00:00.773) 0:02:06.849 ****
2025-09-11 00:48:08.976768 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:48:08.976776 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:48:08.976784 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:48:08.976792 | orchestrator |
2025-09-11 00:48:08.976799 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-09-11 00:48:08.976807 | orchestrator | Thursday 11 September 2025 00:46:41 +0000 (0:00:00.290) 0:02:07.139 ****
2025-09-11 00:48:08.976815 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:48:08.976823 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:48:08.976831 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:48:08.976839 | orchestrator |
2025-09-11 00:48:08.976851 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-09-11 00:48:08.976876 | orchestrator | Thursday 11 September 2025 00:46:42 +0000 (0:00:00.547) 0:02:07.687 ****
2025-09-11 00:48:08.976884 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:48:08.976892 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:48:08.976900 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:48:08.976908 | orchestrator |
2025-09-11 00:48:08.976915 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-09-11 00:48:08.976923 | orchestrator | Thursday 11 September 2025 00:46:42 +0000 (0:00:00.632) 0:02:08.319 ****
2025-09-11 00:48:08.976931 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:48:08.976939 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:48:08.976947 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:48:08.976955 | orchestrator |
2025-09-11 00:48:08.976962 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-09-11 00:48:08.976970 | orchestrator | Thursday 11 September 2025 00:46:43 +0000 (0:00:00.992) 0:02:09.312 ****
2025-09-11 00:48:08.976978 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:48:08.976986 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:48:08.976994 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:48:08.977002 | orchestrator |
2025-09-11 00:48:08.977009 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-09-11 00:48:08.977017 | orchestrator | Thursday 11 September 2025 00:46:44 +0000 (0:00:00.852) 0:02:10.164 ****
2025-09-11 00:48:08.977025 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.977033 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:48:08.977041 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:48:08.977048 | orchestrator |
2025-09-11 00:48:08.977056 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-09-11 00:48:08.977064 | orchestrator | Thursday 11 September 2025 00:46:45 +0000 (0:00:00.294) 0:02:10.459 ****
2025-09-11 00:48:08.977072 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.977080 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:48:08.977087 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:48:08.977095 | orchestrator |
2025-09-11 00:48:08.977103 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-09-11 00:48:08.977111 | orchestrator | Thursday 11 September 2025 00:46:45 +0000 (0:00:00.286) 0:02:10.746 ****
2025-09-11 00:48:08.977119 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:48:08.977127 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:48:08.977134 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:48:08.977142 | orchestrator |
2025-09-11 00:48:08.977150 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-09-11 00:48:08.977158 | orchestrator | Thursday 11 September 2025 00:46:46 +0000 (0:00:00.801) 0:02:11.547 ****
2025-09-11 00:48:08.977166 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:48:08.977174 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:48:08.977181 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:48:08.977189 | orchestrator |
2025-09-11 00:48:08.977197 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-09-11 00:48:08.977205 | orchestrator | Thursday 11 September 2025 00:46:46 +0000 (0:00:00.681) 0:02:12.229 ****
2025-09-11 00:48:08.977213 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-11 00:48:08.977221 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-11 00:48:08.977229 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-11 00:48:08.977237 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-11 00:48:08.977245 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-11 00:48:08.977253 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-11 00:48:08.977265 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-11 00:48:08.977273 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-11 00:48:08.977281 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-11 00:48:08.977293 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-09-11 00:48:08.977301 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-11 00:48:08.977309 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-11 00:48:08.977317 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-11 00:48:08.977328 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-09-11 00:48:08.977336 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-11 00:48:08.977344 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-11 00:48:08.977352 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-11 00:48:08.977360 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-11 00:48:08.977368 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-11 00:48:08.977376 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-11 00:48:08.977384 | orchestrator |
2025-09-11 00:48:08.977392 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-09-11 00:48:08.977400 | orchestrator |
2025-09-11 00:48:08.977407 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-09-11 00:48:08.977415 | orchestrator | Thursday 11 September 2025 00:46:50 +0000 (0:00:03.226) 0:02:15.455 ****
2025-09-11 00:48:08.977423 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:48:08.977431 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:48:08.977439 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:48:08.977447 | orchestrator |
2025-09-11 00:48:08.977455 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-09-11 00:48:08.977462 | orchestrator | Thursday 11 September 2025 00:46:50 +0000 (0:00:00.490) 0:02:15.946 ****
2025-09-11 00:48:08.977470 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:48:08.977478 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:48:08.977486 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:48:08.977494 | orchestrator |
2025-09-11 00:48:08.977502 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-09-11 00:48:08.977509 | orchestrator | Thursday 11 September 2025 00:46:51 +0000 (0:00:00.654) 0:02:16.600 ****
2025-09-11 00:48:08.977517 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:48:08.977525 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:48:08.977533 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:48:08.977541 | orchestrator |
2025-09-11 00:48:08.977548 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-09-11 00:48:08.977556 | orchestrator | Thursday 11 September 2025 00:46:51 +0000 (0:00:00.338) 0:02:16.939 ****
2025-09-11 00:48:08.977564 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:48:08.977572 | orchestrator |
2025-09-11 00:48:08.977580 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-09-11 00:48:08.977588 | orchestrator | Thursday 11 September 2025 00:46:52 +0000 (0:00:00.600) 0:02:17.540 ****
2025-09-11 00:48:08.977596 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:48:08.977604 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:48:08.977616 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:48:08.977623 | orchestrator |
2025-09-11 00:48:08.977631 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-09-11 00:48:08.977639 | orchestrator | Thursday 11 September 2025 00:46:52 +0000 (0:00:00.278) 0:02:17.818 ****
2025-09-11 00:48:08.977647 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:48:08.977655 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:48:08.977663 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:48:08.977671 | orchestrator |
2025-09-11 00:48:08.977679 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-09-11 00:48:08.977687 | orchestrator | Thursday 11 September 2025 00:46:52 +0000 (0:00:00.274) 0:02:18.093 ****
2025-09-11 00:48:08.977694 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:48:08.977702 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:48:08.977710 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:48:08.977718 | orchestrator |
2025-09-11 00:48:08.977726 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-09-11 00:48:08.977733 | orchestrator | Thursday 11 September 2025 00:46:53 +0000 (0:00:00.297) 0:02:18.391 ****
2025-09-11 00:48:08.977741 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:48:08.977749 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:48:08.977757 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:48:08.977765 | orchestrator |
2025-09-11 00:48:08.977773 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-09-11 00:48:08.977780 | orchestrator | Thursday 11 September 2025 00:46:53 +0000 (0:00:00.787) 0:02:19.178 ****
2025-09-11 00:48:08.977788 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:48:08.977796 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:48:08.977804 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:48:08.977812 | orchestrator |
2025-09-11 00:48:08.977819 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-09-11 00:48:08.977827 | orchestrator | Thursday 11 September 2025 00:46:54 +0000 (0:00:01.129) 0:02:20.308 ****
2025-09-11 00:48:08.977835 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:48:08.977843 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:48:08.977851 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:48:08.977868 | orchestrator |
2025-09-11 00:48:08.977876 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-09-11 00:48:08.977884 | orchestrator | Thursday 11 September 2025 00:46:56 +0000 (0:00:01.161) 0:02:21.469 ****
2025-09-11 00:48:08.977892 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:48:08.977900 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:48:08.977908 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:48:08.977916 | orchestrator |
2025-09-11 00:48:08.977928 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-11 00:48:08.977936 | orchestrator |
2025-09-11 00:48:08.977944 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-11 00:48:08.977951 | orchestrator | Thursday 11 September 2025 00:47:08 +0000 (0:00:12.384) 0:02:33.853 ****
2025-09-11 00:48:08.977959 | orchestrator | ok: [testbed-manager]
2025-09-11 00:48:08.977967 | orchestrator |
2025-09-11 00:48:08.977975 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-11 00:48:08.977986 | orchestrator | Thursday 11 September 2025 00:47:09 +0000 (0:00:00.893) 0:02:34.747 ****
2025-09-11 00:48:08.977994 | orchestrator | changed: [testbed-manager]
2025-09-11 00:48:08.978002 | orchestrator |
2025-09-11 00:48:08.978010 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-11 00:48:08.978042 | orchestrator | Thursday 11 September 2025 00:47:09 +0000 (0:00:00.360) 0:02:35.107 ****
2025-09-11 00:48:08.978050 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-11 00:48:08.978058 | orchestrator |
2025-09-11 00:48:08.978066 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-11 00:48:08.978074 | orchestrator | Thursday 11 September 2025 00:47:10 +0000 (0:00:00.614) 0:02:35.722 ****
2025-09-11 00:48:08.978086 | orchestrator | changed: [testbed-manager]
2025-09-11 00:48:08.978094 | orchestrator |
2025-09-11 00:48:08.978102 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-11 00:48:08.978110 | orchestrator | Thursday 11 September 2025 00:47:11 +0000 (0:00:00.793) 0:02:36.516 ****
2025-09-11 00:48:08.978118 | orchestrator | changed: [testbed-manager]
2025-09-11 00:48:08.978125 | orchestrator |
2025-09-11 00:48:08.978133 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-11 00:48:08.978141 | orchestrator | Thursday 11 September 2025 00:47:11 +0000 (0:00:00.611) 0:02:37.128 ****
2025-09-11 00:48:08.978149 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-11 00:48:08.978157 | orchestrator |
2025-09-11 00:48:08.978164 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-11 00:48:08.978172 | orchestrator | Thursday 11 September 2025 00:47:12 +0000 (0:00:01.204) 0:02:38.332 ****
2025-09-11 00:48:08.978180 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-11 00:48:08.978188 | orchestrator |
2025-09-11 00:48:08.978196 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-11 00:48:08.978203 | orchestrator | Thursday 11 September 2025 00:47:13 +0000 (0:00:00.952) 0:02:39.285 ****
2025-09-11 00:48:08.978211 | orchestrator | changed: [testbed-manager]
2025-09-11 00:48:08.978219 | orchestrator |
2025-09-11 00:48:08.978227 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-11 00:48:08.978234 | orchestrator | Thursday 11 September 2025 00:47:14 +0000 (0:00:00.400) 0:02:39.686 ****
2025-09-11 00:48:08.978242 | orchestrator | changed: [testbed-manager]
2025-09-11 00:48:08.978250 | orchestrator |
2025-09-11 00:48:08.978258 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-09-11 00:48:08.978266 | orchestrator |
2025-09-11 00:48:08.978273 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-09-11 00:48:08.978281 | orchestrator | Thursday 11 September 2025 00:47:14 +0000 (0:00:00.587) 0:02:40.273 ****
2025-09-11 00:48:08.978289 | orchestrator | ok: [testbed-manager]
2025-09-11 00:48:08.978297 | orchestrator |
2025-09-11 00:48:08.978305 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-09-11 00:48:08.978313 | orchestrator | Thursday 11 September 2025 00:47:15 +0000 (0:00:00.151) 0:02:40.425 ****
2025-09-11 00:48:08.978320 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-09-11 00:48:08.978328 | orchestrator |
2025-09-11 00:48:08.978336 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-09-11 00:48:08.978344 | orchestrator | Thursday 11 September 2025 00:47:15 +0000 (0:00:00.301) 0:02:40.727 ****
2025-09-11 00:48:08.978352 | orchestrator | ok: [testbed-manager]
2025-09-11 00:48:08.978360 | orchestrator |
2025-09-11 00:48:08.978368 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
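The two "Change server address in the kubeconfig" tasks in the play above come down to a one-line substitution: the kubeconfig copied off testbed-node-0 points at the node's local API endpoint, which must be rewritten to the address the manager should use. A minimal sketch, assuming the k3s default `server: https://127.0.0.1:6443` line (the helper name and target address are illustrative):

```shell
# point_kubeconfig_at: rewrite the API server address in a copied k3s
# kubeconfig in place (illustrative helper; assumes the k3s default line).
point_kubeconfig_at() {  # usage: point_kubeconfig_at <kubeconfig> <host>
  sed -i "s|server: https://127.0.0.1:6443|server: https://$2:6443|" "$1"
}
```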
2025-09-11 00:48:08.978375 | orchestrator | Thursday 11 September 2025 00:47:16 +0000 (0:00:00.800) 0:02:41.528 ****
2025-09-11 00:48:08.978383 | orchestrator | ok: [testbed-manager]
2025-09-11 00:48:08.978391 | orchestrator |
2025-09-11 00:48:08.978399 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-09-11 00:48:08.978407 | orchestrator | Thursday 11 September 2025 00:47:17 +0000 (0:00:01.526) 0:02:43.055 ****
2025-09-11 00:48:08.978414 | orchestrator | changed: [testbed-manager]
2025-09-11 00:48:08.978422 | orchestrator |
2025-09-11 00:48:08.978430 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-09-11 00:48:08.978438 | orchestrator | Thursday 11 September 2025 00:47:18 +0000 (0:00:00.678) 0:02:43.733 ****
2025-09-11 00:48:08.978445 | orchestrator | ok: [testbed-manager]
2025-09-11 00:48:08.978453 | orchestrator |
2025-09-11 00:48:08.978461 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-09-11 00:48:08.978469 | orchestrator | Thursday 11 September 2025 00:47:18 +0000 (0:00:00.381) 0:02:44.114 ****
2025-09-11 00:48:08.978477 | orchestrator | changed: [testbed-manager]
2025-09-11 00:48:08.978488 | orchestrator |
2025-09-11 00:48:08.978496 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-09-11 00:48:08.978504 | orchestrator | Thursday 11 September 2025 00:47:25 +0000 (0:00:06.328) 0:02:50.443 ****
2025-09-11 00:48:08.978512 | orchestrator | changed: [testbed-manager]
2025-09-11 00:48:08.978520 | orchestrator |
2025-09-11 00:48:08.978528 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-09-11 00:48:08.978535 | orchestrator | Thursday 11 September 2025 00:47:38 +0000 (0:00:13.400) 0:03:03.844 ****
2025-09-11 00:48:08.978543 | orchestrator | ok: [testbed-manager]
2025-09-11 00:48:08.978551 | orchestrator |
2025-09-11 00:48:08.978559 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-09-11 00:48:08.978566 | orchestrator |
2025-09-11 00:48:08.978574 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-09-11 00:48:08.978587 | orchestrator | Thursday 11 September 2025 00:47:39 +0000 (0:00:00.546) 0:03:04.390 ****
2025-09-11 00:48:08.978595 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:48:08.978603 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:48:08.978610 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:48:08.978618 | orchestrator |
2025-09-11 00:48:08.978626 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-09-11 00:48:08.978634 | orchestrator | Thursday 11 September 2025 00:47:39 +0000 (0:00:00.350) 0:03:04.740 ****
2025-09-11 00:48:08.978642 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.978650 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:48:08.978657 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:48:08.978665 | orchestrator |
2025-09-11 00:48:08.978677 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-09-11 00:48:08.978685 | orchestrator | Thursday 11 September 2025 00:47:39 +0000 (0:00:00.336) 0:03:05.077 ****
2025-09-11 00:48:08.978692 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:48:08.978700 | orchestrator |
2025-09-11 00:48:08.978708 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-09-11 00:48:08.978716 | orchestrator | Thursday 11 September 2025 00:47:40 +0000 (0:00:00.611) 0:03:05.688 ****
2025-09-11 00:48:08.978724 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.978731 | orchestrator |
2025-09-11 00:48:08.978739 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-09-11 00:48:08.978747 | orchestrator | Thursday 11 September 2025 00:47:40 +0000 (0:00:00.190) 0:03:05.878 ****
2025-09-11 00:48:08.978755 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.978763 | orchestrator |
2025-09-11 00:48:08.978770 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-09-11 00:48:08.978778 | orchestrator | Thursday 11 September 2025 00:47:40 +0000 (0:00:00.234) 0:03:06.113 ****
2025-09-11 00:48:08.978786 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.978794 | orchestrator |
2025-09-11 00:48:08.978801 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-09-11 00:48:08.978809 | orchestrator | Thursday 11 September 2025 00:47:41 +0000 (0:00:00.239) 0:03:06.352 ****
2025-09-11 00:48:08.978817 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.978825 | orchestrator |
2025-09-11 00:48:08.978833 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-09-11 00:48:08.978840 | orchestrator | Thursday 11 September 2025 00:47:41 +0000 (0:00:00.258) 0:03:06.611 ****
2025-09-11 00:48:08.978848 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.978868 | orchestrator |
2025-09-11 00:48:08.978876 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-09-11 00:48:08.978884 | orchestrator | Thursday 11 September 2025 00:47:41 +0000 (0:00:00.181) 0:03:06.792 ****
2025-09-11 00:48:08.978892 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.978899 | orchestrator |
2025-09-11 00:48:08.978907 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-09-11 00:48:08.978919 | orchestrator | Thursday 11 September 2025 00:47:41 +0000 (0:00:00.272) 0:03:07.065 ****
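The Cilium CLI tasks in this play (all skipped on this run) describe a download-verify-extract path: fetch the CLI tarball and its .sha256sum file, verify the tarball, extract it to /usr/local/bin, then remove both files. A minimal local sketch of the verify-then-extract core, with an illustrative helper name:

```shell
# verify_then_extract: check a tarball against its sha256sum file and only
# unpack it into <destdir> when the checksum matches (illustrative helper;
# mirrors the skipped "Verify the downloaded tarball" / "Extract Cilium CLI
# to /usr/local/bin" tasks).
verify_then_extract() {  # usage: verify_then_extract <tarball> <checksum-file> <destdir>
  # sha256sum --check expects to run where the tarball lives, since the
  # checksum file references it by name.
  (cd "$(dirname "$1")" && sha256sum --check --status "$(basename "$2")") || return 1
  tar -C "$3" -xzf "$1"
}
```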
2025-09-11 00:48:08.978927 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.978935 | orchestrator |
2025-09-11 00:48:08.978943 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-11 00:48:08.978951 | orchestrator | Thursday 11 September 2025 00:47:41 +0000 (0:00:00.216) 0:03:07.281 ****
2025-09-11 00:48:08.978958 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.978966 | orchestrator |
2025-09-11 00:48:08.978974 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-11 00:48:08.978982 | orchestrator | Thursday 11 September 2025 00:47:42 +0000 (0:00:00.183) 0:03:07.465 ****
2025-09-11 00:48:08.978989 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.978997 | orchestrator |
2025-09-11 00:48:08.979005 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-11 00:48:08.979013 | orchestrator | Thursday 11 September 2025 00:47:42 +0000 (0:00:00.192) 0:03:07.657 ****
2025-09-11 00:48:08.979021 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-11 00:48:08.979029 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-11 00:48:08.979036 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979044 | orchestrator |
2025-09-11 00:48:08.979052 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-11 00:48:08.979060 | orchestrator | Thursday 11 September 2025 00:47:42 +0000 (0:00:00.532) 0:03:08.190 ****
2025-09-11 00:48:08.979068 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979075 | orchestrator |
2025-09-11 00:48:08.979083 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-11 00:48:08.979091 | orchestrator | Thursday 11 September 2025 00:47:43 +0000 (0:00:00.186) 0:03:08.377 ****
2025-09-11 00:48:08.979098 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979106 | orchestrator |
2025-09-11 00:48:08.979114 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-11 00:48:08.979122 | orchestrator | Thursday 11 September 2025 00:47:43 +0000 (0:00:00.233) 0:03:08.611 ****
2025-09-11 00:48:08.979130 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979137 | orchestrator |
2025-09-11 00:48:08.979145 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-11 00:48:08.979153 | orchestrator | Thursday 11 September 2025 00:47:43 +0000 (0:00:00.270) 0:03:08.881 ****
2025-09-11 00:48:08.979161 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979168 | orchestrator |
2025-09-11 00:48:08.979176 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-11 00:48:08.979184 | orchestrator | Thursday 11 September 2025 00:47:43 +0000 (0:00:00.170) 0:03:09.051 ****
2025-09-11 00:48:08.979192 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979200 | orchestrator |
2025-09-11 00:48:08.979208 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-11 00:48:08.979215 | orchestrator | Thursday 11 September 2025 00:47:43 +0000 (0:00:00.194) 0:03:09.245 ****
2025-09-11 00:48:08.979223 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979231 | orchestrator |
2025-09-11 00:48:08.979239 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-11 00:48:08.979251 | orchestrator | Thursday 11 September 2025 00:47:44 +0000 (0:00:00.229) 0:03:09.474 ****
2025-09-11 00:48:08.979259 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979267 | orchestrator |
2025-09-11 00:48:08.979275 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-11 00:48:08.979282 | orchestrator | Thursday 11 September 2025 00:47:44 +0000 (0:00:00.248) 0:03:09.723 ****
2025-09-11 00:48:08.979290 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979298 | orchestrator |
2025-09-11 00:48:08.979306 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-11 00:48:08.979317 | orchestrator | Thursday 11 September 2025 00:47:44 +0000 (0:00:00.193) 0:03:09.917 ****
2025-09-11 00:48:08.979329 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979337 | orchestrator |
2025-09-11 00:48:08.979345 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-11 00:48:08.979352 | orchestrator | Thursday 11 September 2025 00:47:44 +0000 (0:00:00.174) 0:03:10.091 ****
2025-09-11 00:48:08.979360 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979368 | orchestrator |
2025-09-11 00:48:08.979376 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-11 00:48:08.979383 | orchestrator | Thursday 11 September 2025 00:47:45 +0000 (0:00:00.270) 0:03:10.361 ****
2025-09-11 00:48:08.979391 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979399 | orchestrator |
2025-09-11 00:48:08.979407 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-11 00:48:08.979414 | orchestrator | Thursday 11 September 2025 00:47:45 +0000 (0:00:00.301) 0:03:10.662 ****
2025-09-11 00:48:08.979422 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-11 00:48:08.979430 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-11 00:48:08.979438 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-11 00:48:08.979446 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-11 00:48:08.979454 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979461 | orchestrator |
2025-09-11 00:48:08.979469 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-11 00:48:08.979477 | orchestrator | Thursday 11 September 2025 00:47:46 +0000 (0:00:00.842) 0:03:11.505 ****
2025-09-11 00:48:08.979485 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979492 | orchestrator |
2025-09-11 00:48:08.979500 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-11 00:48:08.979508 | orchestrator | Thursday 11 September 2025 00:47:46 +0000 (0:00:00.256) 0:03:11.761 ****
2025-09-11 00:48:08.979516 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979523 | orchestrator |
2025-09-11 00:48:08.979531 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-11 00:48:08.979539 | orchestrator | Thursday 11 September 2025 00:47:46 +0000 (0:00:00.188) 0:03:11.950 ****
2025-09-11 00:48:08.979547 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979554 | orchestrator |
2025-09-11 00:48:08.979562 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-11 00:48:08.979570 | orchestrator | Thursday 11 September 2025 00:47:46 +0000 (0:00:00.188) 0:03:12.138 ****
2025-09-11 00:48:08.979578 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:48:08.979585 | orchestrator |
2025-09-11 00:48:08.979593 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-11 00:48:08.979601 | orchestrator | Thursday 11 September 2025 00:47:46 +0000 (0:00:00.185) 0:03:12.323 ****
2025-09-11 00:48:08.979609 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-11 00:48:08.979616 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get
CiliumLoadBalancerIPPool.cilium.io)  2025-09-11 00:48:08.979624 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.979632 | orchestrator | 2025-09-11 00:48:08.979640 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-11 00:48:08.979647 | orchestrator | Thursday 11 September 2025 00:47:47 +0000 (0:00:00.276) 0:03:12.600 **** 2025-09-11 00:48:08.979655 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.979663 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.979671 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.979679 | orchestrator | 2025-09-11 00:48:08.979686 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-11 00:48:08.979694 | orchestrator | Thursday 11 September 2025 00:47:47 +0000 (0:00:00.286) 0:03:12.887 **** 2025-09-11 00:48:08.979702 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:08.979710 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:08.979722 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.979730 | orchestrator | 2025-09-11 00:48:08.979738 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-11 00:48:08.979746 | orchestrator | 2025-09-11 00:48:08.979753 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-11 00:48:08.979761 | orchestrator | Thursday 11 September 2025 00:47:48 +0000 (0:00:00.963) 0:03:13.850 **** 2025-09-11 00:48:08.979769 | orchestrator | ok: [testbed-manager] 2025-09-11 00:48:08.979777 | orchestrator | 2025-09-11 00:48:08.979784 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-11 00:48:08.979792 | orchestrator | Thursday 11 September 2025 00:47:48 +0000 (0:00:00.136) 0:03:13.987 **** 2025-09-11 00:48:08.979800 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml 
for testbed-manager 2025-09-11 00:48:08.979808 | orchestrator | 2025-09-11 00:48:08.979816 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-11 00:48:08.979823 | orchestrator | Thursday 11 September 2025 00:47:48 +0000 (0:00:00.239) 0:03:14.227 **** 2025-09-11 00:48:08.979831 | orchestrator | changed: [testbed-manager] 2025-09-11 00:48:08.979839 | orchestrator | 2025-09-11 00:48:08.979847 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-11 00:48:08.979871 | orchestrator | 2025-09-11 00:48:08.979880 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-11 00:48:08.979892 | orchestrator | Thursday 11 September 2025 00:47:53 +0000 (0:00:04.859) 0:03:19.086 **** 2025-09-11 00:48:08.979900 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:48:08.979907 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:48:08.979915 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:48:08.979923 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:48:08.979931 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:48:08.979939 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:48:08.979946 | orchestrator | 2025-09-11 00:48:08.979954 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-11 00:48:08.979962 | orchestrator | Thursday 11 September 2025 00:47:54 +0000 (0:00:00.711) 0:03:19.798 **** 2025-09-11 00:48:08.979976 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-11 00:48:08.979984 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-11 00:48:08.979992 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-11 00:48:08.979999 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2025-09-11 00:48:08.980007 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-11 00:48:08.980015 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-11 00:48:08.980023 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-11 00:48:08.980031 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-11 00:48:08.980039 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-11 00:48:08.980046 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-11 00:48:08.980054 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-11 00:48:08.980062 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-11 00:48:08.980070 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-11 00:48:08.980077 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-11 00:48:08.980085 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-11 00:48:08.980093 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-11 00:48:08.980106 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-11 00:48:08.980113 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-11 00:48:08.980121 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-11 00:48:08.980129 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/rook-mds=true) 2025-09-11 00:48:08.980137 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-11 00:48:08.980145 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-11 00:48:08.980152 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-11 00:48:08.980160 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-11 00:48:08.980168 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-11 00:48:08.980176 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-11 00:48:08.980183 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-11 00:48:08.980191 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-11 00:48:08.980199 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-11 00:48:08.980207 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-11 00:48:08.980215 | orchestrator | 2025-09-11 00:48:08.980222 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-11 00:48:08.980230 | orchestrator | Thursday 11 September 2025 00:48:05 +0000 (0:00:10.585) 0:03:30.384 **** 2025-09-11 00:48:08.980238 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:08.980246 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:08.980254 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:08.980261 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.980269 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.980277 | orchestrator | skipping: [testbed-node-2] 2025-09-11 
00:48:08.980285 | orchestrator | 2025-09-11 00:48:08.980293 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-11 00:48:08.980300 | orchestrator | Thursday 11 September 2025 00:48:05 +0000 (0:00:00.629) 0:03:31.014 **** 2025-09-11 00:48:08.980308 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:48:08.980316 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:48:08.980324 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:48:08.980332 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:48:08.980339 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:48:08.980347 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:48:08.980355 | orchestrator | 2025-09-11 00:48:08.980363 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:48:08.980375 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:48:08.980383 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-11 00:48:08.980391 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-11 00:48:08.980403 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-11 00:48:08.980411 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-11 00:48:08.980423 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-11 00:48:08.980431 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-11 00:48:08.980439 | orchestrator | 2025-09-11 00:48:08.980447 | orchestrator | 2025-09-11 00:48:08.980454 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-11 00:48:08.980462 | orchestrator | Thursday 11 September 2025 00:48:06 +0000 (0:00:00.446) 0:03:31.461 **** 2025-09-11 00:48:08.980470 | orchestrator | =============================================================================== 2025-09-11 00:48:08.980478 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.28s 2025-09-11 00:48:08.980486 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.10s 2025-09-11 00:48:08.980494 | orchestrator | kubectl : Install required packages ------------------------------------ 13.40s 2025-09-11 00:48:08.980501 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.38s 2025-09-11 00:48:08.980509 | orchestrator | Manage labels ---------------------------------------------------------- 10.59s 2025-09-11 00:48:08.980517 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.33s 2025-09-11 00:48:08.980525 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.00s 2025-09-11 00:48:08.980533 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.86s 2025-09-11 00:48:08.980541 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.23s 2025-09-11 00:48:08.980548 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.14s 2025-09-11 00:48:08.980556 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.94s 2025-09-11 00:48:08.980564 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.76s 2025-09-11 00:48:08.980572 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 
1.76s 2025-09-11 00:48:08.980579 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.75s 2025-09-11 00:48:08.980587 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.70s 2025-09-11 00:48:08.980595 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.53s 2025-09-11 00:48:08.980603 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.40s 2025-09-11 00:48:08.980610 | orchestrator | k3s_prereq : Load br_netfilter ------------------------------------------ 1.38s 2025-09-11 00:48:08.980618 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.29s 2025-09-11 00:48:08.980626 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.25s 2025-09-11 00:48:11.998352 | orchestrator | 2025-09-11 00:48:11 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:48:11.998443 | orchestrator | 2025-09-11 00:48:11 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:48:11.999382 | orchestrator | 2025-09-11 00:48:12 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:48:12.000258 | orchestrator | 2025-09-11 00:48:12 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED 2025-09-11 00:48:12.001006 | orchestrator | 2025-09-11 00:48:12 | INFO  | Task cc346b06-190b-4f2c-9a93-2d6cf7fed879 is in state STARTED 2025-09-11 00:48:12.002505 | orchestrator | 2025-09-11 00:48:12 | INFO  | Task c3eec48f-145d-4d94-98a9-f074136b5645 is in state STARTED 2025-09-11 00:48:12.002531 | orchestrator | 2025-09-11 00:48:12 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:48:15.036098 | orchestrator | 2025-09-11 00:48:15 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:48:15.036641 | orchestrator | 2025-09-11 
00:48:15 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:48:15.039217 | orchestrator | 2025-09-11 00:48:15 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:48:15.039250 | orchestrator | 2025-09-11 00:48:15 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED 2025-09-11 00:48:15.039262 | orchestrator | 2025-09-11 00:48:15 | INFO  | Task cc346b06-190b-4f2c-9a93-2d6cf7fed879 is in state SUCCESS 2025-09-11 00:48:15.039593 | orchestrator | 2025-09-11 00:48:15 | INFO  | Task c3eec48f-145d-4d94-98a9-f074136b5645 is in state STARTED 2025-09-11 00:48:15.039813 | orchestrator | 2025-09-11 00:48:15 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:48:18.067647 | orchestrator | 2025-09-11 00:48:18 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:48:18.068044 | orchestrator | 2025-09-11 00:48:18 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:48:18.068647 | orchestrator | 2025-09-11 00:48:18 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:48:18.069333 | orchestrator | 2025-09-11 00:48:18 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED 2025-09-11 00:48:18.069775 | orchestrator | 2025-09-11 00:48:18 | INFO  | Task c3eec48f-145d-4d94-98a9-f074136b5645 is in state SUCCESS 2025-09-11 00:48:18.069845 | orchestrator | 2025-09-11 00:48:18 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:48:21.092312 | orchestrator | 2025-09-11 00:48:21 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:48:21.092825 | orchestrator | 2025-09-11 00:48:21 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:48:21.093736 | orchestrator | 2025-09-11 00:48:21 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:48:21.094437 | orchestrator | 2025-09-11 
00:48:21 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED 2025-09-11 00:48:21.094611 | orchestrator | 2025-09-11 00:48:21 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:49:34.097847 | orchestrator | 2025-09-11 00:49:34 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:49:34.098184 | orchestrator | 2025-09-11 00:49:34 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:49:34.099398 | orchestrator | 2025-09-11 00:49:34 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:49:34.099952 | orchestrator | 2025-09-11 00:49:34 | INFO  | Task
d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state STARTED 2025-09-11 00:49:34.100183 | orchestrator | 2025-09-11 00:49:34 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:49:37.143895 | orchestrator | 2025-09-11 00:49:37 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:49:37.145447 | orchestrator | 2025-09-11 00:49:37 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:49:37.147521 | orchestrator | 2025-09-11 00:49:37 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:49:37.150165 | orchestrator | 2025-09-11 00:49:37 | INFO  | Task d695ac8b-8f5d-4bc8-9a0d-a9797622ba78 is in state SUCCESS 2025-09-11 00:49:37.151316 | orchestrator | 2025-09-11 00:49:37.151354 | orchestrator | 2025-09-11 00:49:37.151367 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-11 00:49:37.151378 | orchestrator | 2025-09-11 00:49:37.151390 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-11 00:49:37.151401 | orchestrator | Thursday 11 September 2025 00:48:10 +0000 (0:00:00.120) 0:00:00.120 **** 2025-09-11 00:49:37.151412 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-11 00:49:37.151424 | orchestrator | 2025-09-11 00:49:37.151435 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-11 00:49:37.151446 | orchestrator | Thursday 11 September 2025 00:48:11 +0000 (0:00:00.715) 0:00:00.836 **** 2025-09-11 00:49:37.151457 | orchestrator | changed: [testbed-manager] 2025-09-11 00:49:37.151468 | orchestrator | 2025-09-11 00:49:37.151478 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-11 00:49:37.151489 | orchestrator | Thursday 11 September 2025 00:48:12 +0000 (0:00:01.053) 0:00:01.890 **** 2025-09-11 00:49:37.151500 | orchestrator | 
changed: [testbed-manager] 2025-09-11 00:49:37.151510 | orchestrator | 2025-09-11 00:49:37.151521 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:49:37.151532 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:49:37.151546 | orchestrator | 2025-09-11 00:49:37.151557 | orchestrator | 2025-09-11 00:49:37.151567 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:49:37.151578 | orchestrator | Thursday 11 September 2025 00:48:12 +0000 (0:00:00.335) 0:00:02.225 **** 2025-09-11 00:49:37.151589 | orchestrator | =============================================================================== 2025-09-11 00:49:37.151599 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.05s 2025-09-11 00:49:37.151610 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2025-09-11 00:49:37.151621 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.34s 2025-09-11 00:49:37.151631 | orchestrator | 2025-09-11 00:49:37.151642 | orchestrator | 2025-09-11 00:49:37.151653 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-11 00:49:37.151663 | orchestrator | 2025-09-11 00:49:37.151674 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-11 00:49:37.151685 | orchestrator | Thursday 11 September 2025 00:48:10 +0000 (0:00:00.125) 0:00:00.125 **** 2025-09-11 00:49:37.151695 | orchestrator | ok: [testbed-manager] 2025-09-11 00:49:37.151707 | orchestrator | 2025-09-11 00:49:37.151718 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-11 00:49:37.151728 | orchestrator | Thursday 11 September 2025 00:48:11 +0000 (0:00:00.526) 0:00:00.652 
**** 2025-09-11 00:49:37.151739 | orchestrator | ok: [testbed-manager] 2025-09-11 00:49:37.151750 | orchestrator | 2025-09-11 00:49:37.151761 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-11 00:49:37.151772 | orchestrator | Thursday 11 September 2025 00:48:11 +0000 (0:00:00.546) 0:00:01.199 **** 2025-09-11 00:49:37.151783 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-11 00:49:37.151793 | orchestrator | 2025-09-11 00:49:37.151804 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-11 00:49:37.151814 | orchestrator | Thursday 11 September 2025 00:48:12 +0000 (0:00:00.712) 0:00:01.911 **** 2025-09-11 00:49:37.151825 | orchestrator | changed: [testbed-manager] 2025-09-11 00:49:37.151836 | orchestrator | 2025-09-11 00:49:37.151846 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-11 00:49:37.151874 | orchestrator | Thursday 11 September 2025 00:48:13 +0000 (0:00:00.809) 0:00:02.720 **** 2025-09-11 00:49:37.151885 | orchestrator | changed: [testbed-manager] 2025-09-11 00:49:37.151896 | orchestrator | 2025-09-11 00:49:37.151907 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-11 00:49:37.151930 | orchestrator | Thursday 11 September 2025 00:48:13 +0000 (0:00:00.621) 0:00:03.342 **** 2025-09-11 00:49:37.151941 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-11 00:49:37.151952 | orchestrator | 2025-09-11 00:49:37.151962 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-11 00:49:37.151973 | orchestrator | Thursday 11 September 2025 00:48:15 +0000 (0:00:01.488) 0:00:04.831 **** 2025-09-11 00:49:37.152005 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-11 00:49:37.152016 | orchestrator | 2025-09-11 00:49:37.152027 | orchestrator | 
TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-11 00:49:37.152037 | orchestrator | Thursday 11 September 2025 00:48:15 +0000 (0:00:00.616) 0:00:05.447 **** 2025-09-11 00:49:37.152048 | orchestrator | ok: [testbed-manager] 2025-09-11 00:49:37.152059 | orchestrator | 2025-09-11 00:49:37.152069 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-11 00:49:37.152080 | orchestrator | Thursday 11 September 2025 00:48:16 +0000 (0:00:00.327) 0:00:05.775 **** 2025-09-11 00:49:37.152091 | orchestrator | ok: [testbed-manager] 2025-09-11 00:49:37.152102 | orchestrator | 2025-09-11 00:49:37.152112 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:49:37.152123 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:49:37.152134 | orchestrator | 2025-09-11 00:49:37.152146 | orchestrator | 2025-09-11 00:49:37.152156 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:49:37.152167 | orchestrator | Thursday 11 September 2025 00:48:16 +0000 (0:00:00.240) 0:00:06.016 **** 2025-09-11 00:49:37.152178 | orchestrator | =============================================================================== 2025-09-11 00:49:37.152189 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.49s 2025-09-11 00:49:37.152199 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.81s 2025-09-11 00:49:37.152210 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.71s 2025-09-11 00:49:37.152233 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.62s 2025-09-11 00:49:37.152245 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.62s 
2025-09-11 00:49:37.152255 | orchestrator | Create .kube directory -------------------------------------------------- 0.55s 2025-09-11 00:49:37.152266 | orchestrator | Get home directory of operator user ------------------------------------- 0.53s 2025-09-11 00:49:37.152277 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.33s 2025-09-11 00:49:37.152287 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.24s 2025-09-11 00:49:37.152298 | orchestrator | 2025-09-11 00:49:37.152309 | orchestrator | 2025-09-11 00:49:37.152320 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-11 00:49:37.152330 | orchestrator | 2025-09-11 00:49:37.152341 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-11 00:49:37.152352 | orchestrator | Thursday 11 September 2025 00:47:19 +0000 (0:00:00.081) 0:00:00.081 **** 2025-09-11 00:49:37.152362 | orchestrator | ok: [localhost] => { 2025-09-11 00:49:37.152374 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-11 00:49:37.152385 | orchestrator | } 2025-09-11 00:49:37.152396 | orchestrator | 2025-09-11 00:49:37.152407 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-11 00:49:37.152418 | orchestrator | Thursday 11 September 2025 00:47:19 +0000 (0:00:00.056) 0:00:00.137 **** 2025-09-11 00:49:37.152429 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-11 00:49:37.152442 | orchestrator | ...ignoring 2025-09-11 00:49:37.152454 | orchestrator | 2025-09-11 00:49:37.152465 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-11 00:49:37.152483 | orchestrator | Thursday 11 September 2025 00:47:22 +0000 (0:00:03.296) 0:00:03.434 **** 2025-09-11 00:49:37.152493 | orchestrator | skipping: [localhost] 2025-09-11 00:49:37.152504 | orchestrator | 2025-09-11 00:49:37.152515 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-11 00:49:37.152525 | orchestrator | Thursday 11 September 2025 00:47:22 +0000 (0:00:00.036) 0:00:03.470 **** 2025-09-11 00:49:37.152536 | orchestrator | ok: [localhost] 2025-09-11 00:49:37.152547 | orchestrator | 2025-09-11 00:49:37.152558 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 00:49:37.152568 | orchestrator | 2025-09-11 00:49:37.152579 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 00:49:37.152590 | orchestrator | Thursday 11 September 2025 00:47:22 +0000 (0:00:00.124) 0:00:03.594 **** 2025-09-11 00:49:37.152601 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:49:37.152612 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:49:37.152622 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:49:37.152633 | orchestrator | 2025-09-11 00:49:37.152644 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 00:49:37.152655 | orchestrator | Thursday 11 September 2025 00:47:23 +0000 (0:00:00.242) 0:00:03.837 **** 2025-09-11 00:49:37.152665 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-11 00:49:37.152676 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2025-09-11 00:49:37.152687 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-11 00:49:37.152698 | orchestrator | 2025-09-11 00:49:37.152709 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-11 00:49:37.152719 | orchestrator | 2025-09-11 00:49:37.152730 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-11 00:49:37.152746 | orchestrator | Thursday 11 September 2025 00:47:23 +0000 (0:00:00.396) 0:00:04.233 **** 2025-09-11 00:49:37.152758 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:49:37.152769 | orchestrator | 2025-09-11 00:49:37.152780 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-11 00:49:37.152790 | orchestrator | Thursday 11 September 2025 00:47:23 +0000 (0:00:00.586) 0:00:04.820 **** 2025-09-11 00:49:37.152801 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:49:37.152812 | orchestrator | 2025-09-11 00:49:37.152823 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-11 00:49:37.152834 | orchestrator | Thursday 11 September 2025 00:47:24 +0000 (0:00:00.909) 0:00:05.729 **** 2025-09-11 00:49:37.152844 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:49:37.152855 | orchestrator | 2025-09-11 00:49:37.152866 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-11 00:49:37.152877 | orchestrator | Thursday 11 September 2025 00:47:25 +0000 (0:00:00.319) 0:00:06.049 **** 2025-09-11 00:49:37.152887 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:49:37.152898 | orchestrator | 2025-09-11 00:49:37.152909 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-11 00:49:37.152920 | 
orchestrator | Thursday 11 September 2025 00:47:25 +0000 (0:00:00.372) 0:00:06.422 **** 2025-09-11 00:49:37.152930 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:49:37.152941 | orchestrator | 2025-09-11 00:49:37.152951 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-11 00:49:37.152962 | orchestrator | Thursday 11 September 2025 00:47:25 +0000 (0:00:00.305) 0:00:06.727 **** 2025-09-11 00:49:37.152973 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:49:37.153001 | orchestrator | 2025-09-11 00:49:37.153012 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-11 00:49:37.153023 | orchestrator | Thursday 11 September 2025 00:47:26 +0000 (0:00:00.357) 0:00:07.085 **** 2025-09-11 00:49:37.153034 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:49:37.153051 | orchestrator | 2025-09-11 00:49:37.153062 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-11 00:49:37.153078 | orchestrator | Thursday 11 September 2025 00:47:27 +0000 (0:00:00.767) 0:00:07.853 **** 2025-09-11 00:49:37.153089 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:49:37.153100 | orchestrator | 2025-09-11 00:49:37.153111 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-11 00:49:37.153122 | orchestrator | Thursday 11 September 2025 00:47:27 +0000 (0:00:00.794) 0:00:08.648 **** 2025-09-11 00:49:37.153132 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:49:37.153143 | orchestrator | 2025-09-11 00:49:37.153154 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-11 00:49:37.153165 | orchestrator | Thursday 11 September 2025 00:47:28 +0000 (0:00:00.481) 0:00:09.130 **** 2025-09-11 00:49:37.153175 | orchestrator | 
skipping: [testbed-node-0] 2025-09-11 00:49:37.153186 | orchestrator | 2025-09-11 00:49:37.153197 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-11 00:49:37.153207 | orchestrator | Thursday 11 September 2025 00:47:28 +0000 (0:00:00.670) 0:00:09.800 **** 2025-09-11 00:49:37.153224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:49:37.153246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:49:37.153261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:49:37.153279 | orchestrator | 2025-09-11 00:49:37.153291 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-11 00:49:37.153302 | orchestrator | Thursday 11 September 2025 00:47:30 +0000 (0:00:01.084) 0:00:10.885 **** 2025-09-11 00:49:37.153321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:49:37.153333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:49:37.153351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:49:37.153363 | orchestrator | 2025-09-11 00:49:37.153374 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-11 00:49:37.153385 | orchestrator | Thursday 11 September 2025 00:47:32 +0000 (0:00:01.971) 0:00:12.856 **** 2025-09-11 00:49:37.153403 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-11 00:49:37.153414 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-11 00:49:37.153425 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-11 00:49:37.153436 | orchestrator | 2025-09-11 00:49:37.153446 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-09-11 00:49:37.153457 | orchestrator | Thursday 11 September 2025 00:47:34 +0000 (0:00:02.692) 0:00:15.548 **** 2025-09-11 00:49:37.153468 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-11 00:49:37.153479 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-11 00:49:37.153489 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-11 00:49:37.153500 | orchestrator | 2025-09-11 00:49:37.153511 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-11 00:49:37.153526 | orchestrator | Thursday 11 September 2025 00:47:36 +0000 (0:00:02.185) 0:00:17.733 **** 2025-09-11 00:49:37.153538 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-11 00:49:37.153548 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-11 00:49:37.153559 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-11 00:49:37.153569 | orchestrator | 2025-09-11 00:49:37.153580 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-11 00:49:37.153591 | orchestrator | Thursday 11 September 2025 00:47:38 +0000 (0:00:01.606) 0:00:19.340 **** 2025-09-11 00:49:37.153602 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-11 00:49:37.153613 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-11 00:49:37.153623 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-11 00:49:37.153634 | orchestrator | 2025-09-11 00:49:37.153645 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-09-11 00:49:37.153655 | orchestrator | Thursday 11 September 2025 00:47:40 +0000 (0:00:02.078) 0:00:21.418 **** 2025-09-11 00:49:37.153666 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-11 00:49:37.153677 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-11 00:49:37.153688 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-11 00:49:37.153699 | orchestrator | 2025-09-11 00:49:37.153709 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-11 00:49:37.153720 | orchestrator | Thursday 11 September 2025 00:47:42 +0000 (0:00:01.931) 0:00:23.350 **** 2025-09-11 00:49:37.153731 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-11 00:49:37.153742 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-11 00:49:37.153753 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-11 00:49:37.153764 | orchestrator | 2025-09-11 00:49:37.153774 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-11 00:49:37.153785 | orchestrator | Thursday 11 September 2025 00:47:43 +0000 (0:00:01.449) 0:00:24.799 **** 2025-09-11 00:49:37.153796 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:49:37.153806 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:49:37.153817 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:49:37.153828 | orchestrator | 2025-09-11 00:49:37.153845 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-11 00:49:37.153855 | orchestrator | Thursday 11 September 2025 
00:47:44 +0000 (0:00:00.537) 0:00:25.337 **** 2025-09-11 00:49:37.153872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:49:37.153891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:49:37.153904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:49:37.153916 | orchestrator | 2025-09-11 00:49:37.153926 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-11 00:49:37.153937 | orchestrator | Thursday 11 September 2025 00:47:46 +0000 (0:00:01.515) 0:00:26.853 **** 2025-09-11 00:49:37.153948 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:49:37.153958 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:49:37.153969 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:49:37.154009 | orchestrator | 2025-09-11 00:49:37.154071 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-11 
00:49:37.154082 | orchestrator | Thursday 11 September 2025 00:47:46 +0000 (0:00:00.887) 0:00:27.740 **** 2025-09-11 00:49:37.154101 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:49:37.154112 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:49:37.154123 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:49:37.154133 | orchestrator | 2025-09-11 00:49:37.154144 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-11 00:49:37.154155 | orchestrator | Thursday 11 September 2025 00:47:53 +0000 (0:00:06.569) 0:00:34.309 **** 2025-09-11 00:49:37.154166 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:49:37.154177 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:49:37.154187 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:49:37.154198 | orchestrator | 2025-09-11 00:49:37.154209 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-11 00:49:37.154220 | orchestrator | 2025-09-11 00:49:37.154230 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-11 00:49:37.154241 | orchestrator | Thursday 11 September 2025 00:47:54 +0000 (0:00:00.723) 0:00:35.033 **** 2025-09-11 00:49:37.154252 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:49:37.154263 | orchestrator | 2025-09-11 00:49:37.154273 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-11 00:49:37.154284 | orchestrator | Thursday 11 September 2025 00:47:55 +0000 (0:00:00.890) 0:00:35.924 **** 2025-09-11 00:49:37.154295 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:49:37.154305 | orchestrator | 2025-09-11 00:49:37.154322 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-11 00:49:37.154333 | orchestrator | Thursday 11 September 2025 00:47:55 +0000 (0:00:00.541) 0:00:36.466 **** 2025-09-11 
00:49:37.154344 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:49:37.154355 | orchestrator | 2025-09-11 00:49:37.154366 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-11 00:49:37.154377 | orchestrator | Thursday 11 September 2025 00:47:58 +0000 (0:00:03.247) 0:00:39.713 **** 2025-09-11 00:49:37.154387 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:49:37.154398 | orchestrator | 2025-09-11 00:49:37.154409 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-11 00:49:37.154420 | orchestrator | 2025-09-11 00:49:37.154430 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-11 00:49:37.154441 | orchestrator | Thursday 11 September 2025 00:48:54 +0000 (0:00:55.139) 0:01:34.853 **** 2025-09-11 00:49:37.154452 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:49:37.154462 | orchestrator | 2025-09-11 00:49:37.154473 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-11 00:49:37.154484 | orchestrator | Thursday 11 September 2025 00:48:54 +0000 (0:00:00.549) 0:01:35.402 **** 2025-09-11 00:49:37.154495 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:49:37.154505 | orchestrator | 2025-09-11 00:49:37.154516 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-11 00:49:37.154526 | orchestrator | Thursday 11 September 2025 00:48:54 +0000 (0:00:00.232) 0:01:35.635 **** 2025-09-11 00:49:37.154537 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:49:37.154548 | orchestrator | 2025-09-11 00:49:37.154558 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-11 00:49:37.154569 | orchestrator | Thursday 11 September 2025 00:48:56 +0000 (0:00:01.835) 0:01:37.470 **** 2025-09-11 00:49:37.154579 | orchestrator | changed: 
[testbed-node-1] 2025-09-11 00:49:37.154590 | orchestrator | 2025-09-11 00:49:37.154601 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-11 00:49:37.154611 | orchestrator | 2025-09-11 00:49:37.154622 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-11 00:49:37.154633 | orchestrator | Thursday 11 September 2025 00:49:11 +0000 (0:00:15.008) 0:01:52.479 **** 2025-09-11 00:49:37.154643 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:49:37.154654 | orchestrator | 2025-09-11 00:49:37.154672 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-11 00:49:37.154697 | orchestrator | Thursday 11 September 2025 00:49:12 +0000 (0:00:00.621) 0:01:53.100 **** 2025-09-11 00:49:37.154708 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:49:37.154719 | orchestrator | 2025-09-11 00:49:37.154730 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-11 00:49:37.154741 | orchestrator | Thursday 11 September 2025 00:49:12 +0000 (0:00:00.354) 0:01:53.454 **** 2025-09-11 00:49:37.154752 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:49:37.154762 | orchestrator | 2025-09-11 00:49:37.154773 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-11 00:49:37.154784 | orchestrator | Thursday 11 September 2025 00:49:14 +0000 (0:00:01.673) 0:01:55.127 **** 2025-09-11 00:49:37.154795 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:49:37.154805 | orchestrator | 2025-09-11 00:49:37.154816 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-11 00:49:37.154827 | orchestrator | 2025-09-11 00:49:37.154838 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-11 00:49:37.154849 | orchestrator | Thursday 11 
September 2025 00:49:30 +0000 (0:00:16.369) 0:02:11.497 **** 2025-09-11 00:49:37.154859 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:49:37.154870 | orchestrator | 2025-09-11 00:49:37.154881 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-11 00:49:37.154892 | orchestrator | Thursday 11 September 2025 00:49:31 +0000 (0:00:00.614) 0:02:12.112 **** 2025-09-11 00:49:37.154902 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-11 00:49:37.154913 | orchestrator | enable_outward_rabbitmq_True 2025-09-11 00:49:37.154924 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-11 00:49:37.154935 | orchestrator | outward_rabbitmq_restart 2025-09-11 00:49:37.154946 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:49:37.154956 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:49:37.154967 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:49:37.154978 | orchestrator | 2025-09-11 00:49:37.155005 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-11 00:49:37.155016 | orchestrator | skipping: no hosts matched 2025-09-11 00:49:37.155027 | orchestrator | 2025-09-11 00:49:37.155038 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-11 00:49:37.155049 | orchestrator | skipping: no hosts matched 2025-09-11 00:49:37.155059 | orchestrator | 2025-09-11 00:49:37.155070 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-11 00:49:37.155081 | orchestrator | skipping: no hosts matched 2025-09-11 00:49:37.155091 | orchestrator | 2025-09-11 00:49:37.155102 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:49:37.155113 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1 
 rescued=0 ignored=1  2025-09-11 00:49:37.155124 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-11 00:49:37.155135 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-11 00:49:37.155146 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-11 00:49:37.155157 | orchestrator | 2025-09-11 00:49:37.155167 | orchestrator | 2025-09-11 00:49:37.155183 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:49:37.155195 | orchestrator | Thursday 11 September 2025 00:49:34 +0000 (0:00:02.959) 0:02:15.071 **** 2025-09-11 00:49:37.155205 | orchestrator | =============================================================================== 2025-09-11 00:49:37.155216 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 86.52s 2025-09-11 00:49:37.155233 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.76s 2025-09-11 00:49:37.155244 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.57s 2025-09-11 00:49:37.155255 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.30s 2025-09-11 00:49:37.155266 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.96s 2025-09-11 00:49:37.155277 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.69s 2025-09-11 00:49:37.155287 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.19s 2025-09-11 00:49:37.155298 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.08s 2025-09-11 00:49:37.155308 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.06s 
2025-09-11 00:49:37.155319 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.97s 2025-09-11 00:49:37.155330 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.93s 2025-09-11 00:49:37.155340 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.61s 2025-09-11 00:49:37.155351 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.52s 2025-09-11 00:49:37.155362 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.45s 2025-09-11 00:49:37.155372 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.13s 2025-09-11 00:49:37.155383 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.09s 2025-09-11 00:49:37.155394 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.91s 2025-09-11 00:49:37.155410 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.89s 2025-09-11 00:49:37.155421 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.79s 2025-09-11 00:49:37.155433 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.77s 2025-09-11 00:49:37.155443 | orchestrator | 2025-09-11 00:49:37 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:49:40.181935 | orchestrator | 2025-09-11 00:49:40 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:49:40.183069 | orchestrator | 2025-09-11 00:49:40 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:49:40.187675 | orchestrator | 2025-09-11 00:49:40 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:49:40.188876 | orchestrator | 2025-09-11 00:49:40 | INFO  | Wait 1 second(s) until the next check 
2025-09-11 00:49:43.226479 | orchestrator | 2025-09-11 00:49:43 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:49:43.228217 | orchestrator | 2025-09-11 00:49:43 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:49:43.230276 | orchestrator | 2025-09-11 00:49:43 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:49:43.230391 | orchestrator | 2025-09-11 00:49:43 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:49:46.266756 | orchestrator | 2025-09-11 00:49:46 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:49:46.267964 | orchestrator | 2025-09-11 00:49:46 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:49:46.269118 | orchestrator | 2025-09-11 00:49:46 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:49:46.269144 | orchestrator | 2025-09-11 00:49:46 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:49:49.304366 | orchestrator | 2025-09-11 00:49:49 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:49:49.305032 | orchestrator | 2025-09-11 00:49:49 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:49:49.306433 | orchestrator | 2025-09-11 00:49:49 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:49:49.307121 | orchestrator | 2025-09-11 00:49:49 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:49:52.350234 | orchestrator | 2025-09-11 00:49:52 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:49:52.350321 | orchestrator | 2025-09-11 00:49:52 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:49:52.350374 | orchestrator | 2025-09-11 00:49:52 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:49:52.350404 | 
orchestrator | 2025-09-11 00:49:52 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:49:55.389338 | orchestrator | 2025-09-11 00:49:55 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:49:55.389784 | orchestrator | 2025-09-11 00:49:55 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:49:55.390840 | orchestrator | 2025-09-11 00:49:55 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:49:55.390876 | orchestrator | 2025-09-11 00:49:55 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:49:58.426457 | orchestrator | 2025-09-11 00:49:58 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:49:58.427268 | orchestrator | 2025-09-11 00:49:58 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:49:58.428979 | orchestrator | 2025-09-11 00:49:58 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:49:58.429086 | orchestrator | 2025-09-11 00:49:58 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:50:01.480919 | orchestrator | 2025-09-11 00:50:01 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:50:01.481533 | orchestrator | 2025-09-11 00:50:01 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:50:01.483186 | orchestrator | 2025-09-11 00:50:01 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:50:01.483459 | orchestrator | 2025-09-11 00:50:01 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:50:04.526713 | orchestrator | 2025-09-11 00:50:04 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:50:04.527463 | orchestrator | 2025-09-11 00:50:04 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:50:04.528448 | orchestrator | 2025-09-11 00:50:04 | INFO  | Task 
d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:50:04.528861 | orchestrator | 2025-09-11 00:50:04 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:50:07.559362 | orchestrator | 2025-09-11 00:50:07 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:50:07.560051 | orchestrator | 2025-09-11 00:50:07 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:50:07.561835 | orchestrator | 2025-09-11 00:50:07 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:50:07.561859 | orchestrator | 2025-09-11 00:50:07 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:50:10.609368 | orchestrator | 2025-09-11 00:50:10 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:50:10.612605 | orchestrator | 2025-09-11 00:50:10 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:50:10.614284 | orchestrator | 2025-09-11 00:50:10 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:50:10.614612 | orchestrator | 2025-09-11 00:50:10 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:50:13.652287 | orchestrator | 2025-09-11 00:50:13 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:50:13.654368 | orchestrator | 2025-09-11 00:50:13 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:50:13.656602 | orchestrator | 2025-09-11 00:50:13 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:50:13.656843 | orchestrator | 2025-09-11 00:50:13 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:50:16.702244 | orchestrator | 2025-09-11 00:50:16 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:50:16.703148 | orchestrator | 2025-09-11 00:50:16 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state 
STARTED 2025-09-11 00:50:16.706284 | orchestrator | 2025-09-11 00:50:16 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:50:16.706325 | orchestrator | 2025-09-11 00:50:16 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:50:19.747749 | orchestrator | 2025-09-11 00:50:19 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:50:19.748911 | orchestrator | 2025-09-11 00:50:19 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:50:19.750535 | orchestrator | 2025-09-11 00:50:19 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state STARTED 2025-09-11 00:50:19.750905 | orchestrator | 2025-09-11 00:50:19 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:50:22.792913 | orchestrator | 2025-09-11 00:50:22 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED 2025-09-11 00:50:22.795688 | orchestrator | 2025-09-11 00:50:22 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:50:22.799751 | orchestrator | 2025-09-11 00:50:22 | INFO  | Task d78a1e8c-a4ad-41c4-828a-901ca935587f is in state SUCCESS 2025-09-11 00:50:22.802443 | orchestrator | 2025-09-11 00:50:22.802549 | orchestrator | 2025-09-11 00:50:22.802566 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 00:50:22.802579 | orchestrator | 2025-09-11 00:50:22.802591 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 00:50:22.802602 | orchestrator | Thursday 11 September 2025 00:48:10 +0000 (0:00:00.180) 0:00:00.180 **** 2025-09-11 00:50:22.802614 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:50:22.802626 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:50:22.802637 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:50:22.802648 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:50:22.802659 | orchestrator | ok: 
[testbed-node-4] 2025-09-11 00:50:22.802670 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:50:22.802681 | orchestrator | 2025-09-11 00:50:22.802692 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 00:50:22.802703 | orchestrator | Thursday 11 September 2025 00:48:10 +0000 (0:00:00.513) 0:00:00.693 **** 2025-09-11 00:50:22.802714 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-11 00:50:22.802726 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-11 00:50:22.802737 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-11 00:50:22.802748 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-11 00:50:22.802759 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-11 00:50:22.802770 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-11 00:50:22.802781 | orchestrator | 2025-09-11 00:50:22.802816 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-11 00:50:22.802858 | orchestrator | 2025-09-11 00:50:22.802870 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-11 00:50:22.802881 | orchestrator | Thursday 11 September 2025 00:48:12 +0000 (0:00:01.298) 0:00:01.992 **** 2025-09-11 00:50:22.802893 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:50:22.802906 | orchestrator | 2025-09-11 00:50:22.802917 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-11 00:50:22.802928 | orchestrator | Thursday 11 September 2025 00:48:13 +0000 (0:00:01.202) 0:00:03.195 **** 2025-09-11 00:50:22.802941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.802956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.802967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.802979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803029 | orchestrator | 2025-09-11 00:50:22.803090 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-11 00:50:22.803103 | orchestrator | Thursday 11 September 2025 00:48:14 +0000 (0:00:01.181) 0:00:04.376 **** 2025-09-11 00:50:22.803116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803204 | orchestrator | 2025-09-11 00:50:22.803216 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-11 00:50:22.803229 | orchestrator | Thursday 11 September 2025 00:48:15 +0000 (0:00:01.514) 0:00:05.891 **** 
2025-09-11 00:50:22.803242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.803404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803428 | orchestrator |
2025-09-11 00:50:22.803439 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-09-11 00:50:22.803450 | orchestrator | Thursday 11 September 2025 00:48:16 +0000 (0:00:00.964) 0:00:06.856 ****
2025-09-11 00:50:22.803462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803484 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803544 | orchestrator |
2025-09-11 00:50:22.803563 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-09-11 00:50:22.803575 | orchestrator | Thursday 11 September 2025 00:48:18 +0000 (0:00:01.495) 0:00:08.351 ****
2025-09-11 00:50:22.803587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.803654 | orchestrator |
2025-09-11 00:50:22.803666 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-09-11 00:50:22.803676 | orchestrator | Thursday 11 September 2025 00:48:19 +0000 (0:00:01.442) 0:00:09.794 ****
2025-09-11 00:50:22.803687 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:50:22.803699 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:50:22.803709 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:50:22.803720 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:50:22.803731 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:50:22.803741 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:50:22.803752 | orchestrator |
2025-09-11 00:50:22.803763 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-09-11 00:50:22.803774 | orchestrator | Thursday 11 September 2025 00:48:22 +0000 (0:00:02.564) 0:00:12.359 ****
2025-09-11 00:50:22.803785 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-09-11 00:50:22.803803 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-09-11 00:50:22.803814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-09-11 00:50:22.803825 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-09-11 00:50:22.803841 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-09-11 00:50:22.803852 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-09-11 00:50:22.803863 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-11 00:50:22.803874 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-11 00:50:22.803890 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-11 00:50:22.803902 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-11 00:50:22.803913 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-11 00:50:22.803924 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-11 00:50:22.803935 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-11 00:50:22.803947 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-11 00:50:22.803958 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-11 00:50:22.803969 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-11 00:50:22.803980 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-11 00:50:22.803990 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-11 00:50:22.804001 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-11 00:50:22.804013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-11 00:50:22.804023 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-11 00:50:22.804069 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-11 00:50:22.804081 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-11 00:50:22.804092 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-11 00:50:22.804103 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-11 00:50:22.804114 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-11 00:50:22.804125 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-11 00:50:22.804135 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-11 00:50:22.804146 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-11 00:50:22.804157 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-11 00:50:22.804178 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-11 00:50:22.804189 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-11 00:50:22.804200 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-11 00:50:22.804211 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-11 00:50:22.804222 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-11 00:50:22.804232 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-11 00:50:22.804243 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-11 00:50:22.804254 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-11 00:50:22.804265 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-11 00:50:22.804276 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-11 00:50:22.804287 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-11 00:50:22.804303 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-11 00:50:22.804314 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-09-11 00:50:22.804325 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-09-11 00:50:22.804343 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-09-11 00:50:22.804354 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-09-11 00:50:22.804365 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-09-11 00:50:22.804376 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-09-11 00:50:22.804387 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-11 00:50:22.804398 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-11 00:50:22.804409 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-11 00:50:22.804420 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-11 00:50:22.804431 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-11 00:50:22.804442 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-11 00:50:22.804453 | orchestrator |
2025-09-11 00:50:22.804464 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-11 00:50:22.804475 | orchestrator | Thursday 11 September 2025 00:48:39 +0000 (0:00:17.293) 0:00:29.653 ****
2025-09-11 00:50:22.804486 | orchestrator |
2025-09-11 00:50:22.804496 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-11 00:50:22.804507 | orchestrator | Thursday 11 September 2025 00:48:39 +0000 (0:00:00.283) 0:00:29.936 ****
2025-09-11 00:50:22.804525 | orchestrator |
2025-09-11 00:50:22.804536 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-11 00:50:22.804547 | orchestrator | Thursday 11 September 2025 00:48:40 +0000 (0:00:00.095) 0:00:30.032 ****
2025-09-11 00:50:22.804557 | orchestrator |
2025-09-11 00:50:22.804568 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-11 00:50:22.804579 | orchestrator | Thursday 11 September 2025 00:48:40 +0000 (0:00:00.116) 0:00:30.148 ****
2025-09-11 00:50:22.804589 | orchestrator |
2025-09-11 00:50:22.804600 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-11 00:50:22.804611 | orchestrator | Thursday 11 September 2025 00:48:40 +0000 (0:00:00.086) 0:00:30.235 ****
2025-09-11 00:50:22.804622 | orchestrator |
2025-09-11 00:50:22.804632 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-11 00:50:22.804643 | orchestrator | Thursday 11 September 2025 00:48:40 +0000 (0:00:00.067) 0:00:30.302 ****
2025-09-11 00:50:22.804654 | orchestrator |
2025-09-11 00:50:22.804665 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-09-11 00:50:22.804675 | orchestrator | Thursday 11 September 2025 00:48:40 +0000 (0:00:00.074) 0:00:30.377 ****
2025-09-11 00:50:22.804686 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.804697 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:50:22.804708 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.804718 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.804729 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:50:22.804740 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:50:22.804750 | orchestrator |
2025-09-11 00:50:22.804761 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-09-11 00:50:22.804772 | orchestrator | Thursday 11 September 2025 00:48:41 +0000 (0:00:01.370) 0:00:31.747 ****
2025-09-11 00:50:22.804783 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:50:22.804793 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:50:22.804804 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:50:22.804815 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:50:22.804825 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:50:22.804836 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:50:22.804847 | orchestrator |
2025-09-11 00:50:22.804858 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-11 00:50:22.804869 | orchestrator |
2025-09-11 00:50:22.804880 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-11 00:50:22.804890 | orchestrator | Thursday 11 September 2025 00:49:11 +0000 (0:00:30.026) 0:01:01.774 ****
2025-09-11 00:50:22.804901 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:50:22.804913 | orchestrator |
2025-09-11 00:50:22.804923 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-11 00:50:22.804934 | orchestrator | Thursday 11 September 2025 00:49:12 +0000 (0:00:00.862) 0:01:02.636 ****
2025-09-11 00:50:22.804945 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:50:22.804956 | orchestrator |
2025-09-11 00:50:22.804972 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-11 00:50:22.804984 | orchestrator | Thursday 11 September 2025 00:49:13 +0000 (0:00:00.483) 0:01:03.120 ****
2025-09-11 00:50:22.804995 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.805006 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.805017 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.805027 | orchestrator |
2025-09-11 00:50:22.805097 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-11 00:50:22.805109 | orchestrator | Thursday 11 September 2025 00:49:14 +0000 (0:00:00.881) 0:01:04.002 ****
2025-09-11 00:50:22.805120 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.805131 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.805142 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.805159 | orchestrator |
2025-09-11 00:50:22.805179 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-11 00:50:22.805191 | orchestrator | Thursday 11 September 2025 00:49:14 +0000 (0:00:00.351) 0:01:04.353 ****
2025-09-11 00:50:22.805202 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.805214 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.805224 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.805235 | orchestrator |
2025-09-11 00:50:22.805246 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-11 00:50:22.805255 | orchestrator | Thursday 11 September 2025 00:49:14 +0000 (0:00:00.349) 0:01:04.703 ****
2025-09-11 00:50:22.805265 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.805275 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.805284 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.805294 | orchestrator |
2025-09-11 00:50:22.805304 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-09-11 00:50:22.805313 | orchestrator | Thursday 11 September 2025 00:49:15 +0000 (0:00:00.315) 0:01:05.019 ****
2025-09-11 00:50:22.805323 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.805332 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.805342 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.805352 | orchestrator |
2025-09-11 00:50:22.805361 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-09-11 00:50:22.805371 | orchestrator | Thursday 11 September 2025 00:49:15 +0000 (0:00:00.368) 0:01:05.388 ****
2025-09-11 00:50:22.805381 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805390 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805400 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.805410 | orchestrator |
2025-09-11 00:50:22.805420 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-09-11 00:50:22.805429 | orchestrator | Thursday 11 September 2025 00:49:15 +0000 (0:00:00.234) 0:01:05.622 ****
2025-09-11 00:50:22.805439 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805449 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805459 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.805468 | orchestrator |
2025-09-11 00:50:22.805478 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-09-11 00:50:22.805488 | orchestrator | Thursday 11 September 2025 00:49:15 +0000 (0:00:00.254) 0:01:05.877 ****
2025-09-11 00:50:22.805498 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805508 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805518 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.805528 | orchestrator |
2025-09-11 00:50:22.805537 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-09-11 00:50:22.805548 | orchestrator | Thursday 11 September 2025 00:49:16 +0000 (0:00:00.265) 0:01:06.142 ****
2025-09-11 00:50:22.805558 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805567 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805577 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.805587 | orchestrator |
2025-09-11 00:50:22.805596 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-09-11 00:50:22.805606 | orchestrator | Thursday 11 September 2025 00:49:16 +0000 (0:00:00.357) 0:01:06.500 ****
2025-09-11 00:50:22.805616 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805626 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805635 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.805645 | orchestrator |
2025-09-11 00:50:22.805654 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-09-11 00:50:22.805664 | orchestrator | Thursday 11 September 2025 00:49:16 +0000 (0:00:00.241) 0:01:06.741 ****
2025-09-11 00:50:22.805674 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805684 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805694 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.805704 | orchestrator |
2025-09-11 00:50:22.805714 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-09-11 00:50:22.805729 | orchestrator | Thursday 11 September 2025 00:49:17 +0000 (0:00:00.245) 0:01:06.987 ****
2025-09-11 00:50:22.805739 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805749 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805759 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.805769 | orchestrator |
2025-09-11 00:50:22.805779 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-09-11 00:50:22.805788 | orchestrator | Thursday 11 September 2025 00:49:17 +0000 (0:00:00.242) 0:01:07.229 ****
2025-09-11 00:50:22.805798 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805808 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805818 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.805828 | orchestrator |
2025-09-11 00:50:22.805837 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-09-11 00:50:22.805847 | orchestrator | Thursday 11 September 2025 00:49:17 +0000 (0:00:00.227) 0:01:07.457 ****
2025-09-11 00:50:22.805857 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805867 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805877 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.805886 | orchestrator |
2025-09-11 00:50:22.805896 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-09-11 00:50:22.805906 | orchestrator | Thursday 11 September 2025 00:49:17 +0000 (0:00:00.365) 0:01:07.822 ****
2025-09-11 00:50:22.805915 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805925 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805934 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.805944 | orchestrator |
2025-09-11 00:50:22.805954 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-09-11 00:50:22.805968 | orchestrator | Thursday 11 September 2025 00:49:18 +0000 (0:00:00.249) 0:01:08.072 ****
2025-09-11 00:50:22.805978 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.805988 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.805998 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.806007 | orchestrator |
2025-09-11 00:50:22.806081 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-09-11 00:50:22.806095 | orchestrator | Thursday 11 September 2025 00:49:18 +0000 (0:00:00.278) 0:01:08.350 ****
2025-09-11 00:50:22.806105 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.806114 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.806131 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.806141 | orchestrator |
2025-09-11 00:50:22.806151 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-11 00:50:22.806161 | orchestrator | Thursday 11 September 2025 00:49:18 +0000 (0:00:00.251) 0:01:08.602 ****
2025-09-11 00:50:22.806171 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:50:22.806180 | orchestrator |
2025-09-11 00:50:22.806190 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-09-11 00:50:22.806200 | orchestrator | Thursday 11 September 2025 00:49:19 +0000 (0:00:00.781) 0:01:09.383 ****
2025-09-11 00:50:22.806217 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.806233 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.806251 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.806268 | orchestrator |
2025-09-11 00:50:22.806284 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-09-11 00:50:22.806301 | orchestrator | Thursday 11 September 2025 00:49:19 +0000 (0:00:00.536) 0:01:09.919 ****
2025-09-11 00:50:22.806317 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.806332 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.806348 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.806364 | orchestrator |
2025-09-11 00:50:22.806380 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-09-11 00:50:22.806394 | orchestrator | Thursday 11 September 2025 00:49:20 +0000 (0:00:00.442) 0:01:10.362 ****
2025-09-11 00:50:22.806422 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.806437 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.806452 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.806467 | orchestrator |
2025-09-11 00:50:22.806484 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-09-11 00:50:22.806502 | orchestrator | Thursday 11 September 2025 00:49:21 +0000 (0:00:00.628) 0:01:10.991 ****
2025-09-11 00:50:22.806518 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.806533 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.806543 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.806553 | orchestrator |
2025-09-11 00:50:22.806563 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-09-11 00:50:22.806572 | orchestrator | Thursday 11 September 2025 00:49:21 +0000 (0:00:00.400) 0:01:11.391 ****
2025-09-11 00:50:22.806582 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.806591 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.806600 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.806610 | orchestrator |
2025-09-11 00:50:22.806619 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-09-11 00:50:22.806629 | orchestrator | Thursday 11 September 2025 00:49:21 +0000 (0:00:00.242) 0:01:11.634 ****
2025-09-11 00:50:22.806646 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.806662 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.806678 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.806693 | orchestrator |
2025-09-11 00:50:22.806710 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-09-11 00:50:22.806727 | orchestrator | Thursday 11 September 2025 00:49:21 +0000 (0:00:00.288) 0:01:11.923 ****
2025-09-11 00:50:22.806741 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.806751 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.806760 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.806770 | orchestrator |
2025-09-11 00:50:22.806779 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-09-11 00:50:22.806789 | orchestrator | Thursday 11 September 2025 00:49:22 +0000 (0:00:00.384) 0:01:12.308 ****
2025-09-11 00:50:22.806799 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.806808 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.806817 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.806827 | orchestrator |
2025-09-11 00:50:22.806836 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-11 00:50:22.806846 | orchestrator | Thursday 11 September 2025 00:49:22 +0000 (0:00:00.316) 0:01:12.624 ****
2025-09-11 00:50:22.806857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.806869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.806891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.806910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.806932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.806942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.806952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.806962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.806972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.806982 | orchestrator |
2025-09-11 00:50:22.806992 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-11 00:50:22.807002 | orchestrator | Thursday 11 September 2025 00:49:24 +0000 (0:00:01.558) 0:01:14.182 ****
2025-09-11 00:50:22.807012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.807022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.807053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.807071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.807094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.807105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.807115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.807125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.807135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.807145 | orchestrator |
2025-09-11 00:50:22.807155 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-11 00:50:22.807165 | orchestrator | Thursday 11 September 2025 00:49:28 +0000 (0:00:03.827) 0:01:18.009 ****
2025-09-11 00:50:22.807175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-11 00:50:22.807185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True,
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.807195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.807205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.807226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.807242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.807252 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.807262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.807273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.807282 | orchestrator | 2025-09-11 00:50:22.807292 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-11 00:50:22.807302 | orchestrator | Thursday 11 September 2025 00:49:30 +0000 (0:00:02.446) 0:01:20.455 **** 2025-09-11 00:50:22.807312 | orchestrator | 2025-09-11 00:50:22.807322 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-11 00:50:22.807331 | orchestrator | Thursday 11 September 2025 00:49:30 +0000 (0:00:00.139) 0:01:20.595 **** 2025-09-11 00:50:22.807342 | orchestrator | 2025-09-11 00:50:22.807351 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 
2025-09-11 00:50:22.807361 | orchestrator | Thursday 11 September 2025 00:49:30 +0000 (0:00:00.160) 0:01:20.755 ****
2025-09-11 00:50:22.807371 | orchestrator |
2025-09-11 00:50:22.807381 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-11 00:50:22.807392 | orchestrator | Thursday 11 September 2025 00:49:30 +0000 (0:00:00.148) 0:01:20.903 ****
2025-09-11 00:50:22.807408 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:50:22.807425 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:50:22.807441 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:50:22.807457 | orchestrator |
2025-09-11 00:50:22.807472 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-11 00:50:22.807487 | orchestrator | Thursday 11 September 2025 00:49:38 +0000 (0:00:07.721) 0:01:28.625 ****
2025-09-11 00:50:22.807503 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:50:22.807520 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:50:22.807538 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:50:22.807551 | orchestrator |
2025-09-11 00:50:22.807561 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-11 00:50:22.807578 | orchestrator | Thursday 11 September 2025 00:49:41 +0000 (0:00:02.509) 0:01:31.135 ****
2025-09-11 00:50:22.807588 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:50:22.807598 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:50:22.807607 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:50:22.807617 | orchestrator |
2025-09-11 00:50:22.807626 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-11 00:50:22.807636 | orchestrator | Thursday 11 September 2025 00:49:43 +0000 (0:00:02.829) 0:01:33.965 ****
2025-09-11 00:50:22.807645 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:50:22.807655 | orchestrator |
2025-09-11 00:50:22.807664 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-11 00:50:22.807674 | orchestrator | Thursday 11 September 2025 00:49:44 +0000 (0:00:00.351) 0:01:34.317 ****
2025-09-11 00:50:22.807683 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.807693 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.807702 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.807712 | orchestrator |
2025-09-11 00:50:22.807722 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-11 00:50:22.807731 | orchestrator | Thursday 11 September 2025 00:49:45 +0000 (0:00:00.976) 0:01:35.293 ****
2025-09-11 00:50:22.807741 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.807750 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.807760 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:50:22.807769 | orchestrator |
2025-09-11 00:50:22.807779 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-11 00:50:22.807789 | orchestrator | Thursday 11 September 2025 00:49:45 +0000 (0:00:00.603) 0:01:35.896 ****
2025-09-11 00:50:22.807798 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.807808 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.807818 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.807827 | orchestrator |
2025-09-11 00:50:22.807842 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-11 00:50:22.807853 | orchestrator | Thursday 11 September 2025 00:49:46 +0000 (0:00:00.757) 0:01:36.653 ****
2025-09-11 00:50:22.807862 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.807872 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.807881 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:50:22.807891 | orchestrator |
2025-09-11 00:50:22.807900
| orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-11 00:50:22.807910 | orchestrator | Thursday 11 September 2025 00:49:47 +0000 (0:00:00.665) 0:01:37.319 **** 2025-09-11 00:50:22.807920 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:50:22.807929 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:50:22.807946 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:50:22.807956 | orchestrator | 2025-09-11 00:50:22.807966 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-11 00:50:22.807976 | orchestrator | Thursday 11 September 2025 00:49:48 +0000 (0:00:01.040) 0:01:38.360 **** 2025-09-11 00:50:22.807985 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:50:22.807995 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:50:22.808004 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:50:22.808014 | orchestrator | 2025-09-11 00:50:22.808023 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-11 00:50:22.808094 | orchestrator | Thursday 11 September 2025 00:49:49 +0000 (0:00:00.718) 0:01:39.078 **** 2025-09-11 00:50:22.808107 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:50:22.808116 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:50:22.808126 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:50:22.808135 | orchestrator | 2025-09-11 00:50:22.808145 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-11 00:50:22.808155 | orchestrator | Thursday 11 September 2025 00:49:49 +0000 (0:00:00.309) 0:01:39.388 **** 2025-09-11 00:50:22.808165 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808179 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808188 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808205 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808213 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808222 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808234 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808249 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808257 | orchestrator | 2025-09-11 00:50:22.808265 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-11 00:50:22.808273 | orchestrator | Thursday 11 September 2025 00:49:50 +0000 (0:00:01.485) 0:01:40.873 **** 2025-09-11 00:50:22.808281 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808295 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808303 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808319 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808344 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808364 | orchestrator | 2025-09-11 00:50:22.808372 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-11 00:50:22.808380 | orchestrator | Thursday 11 September 2025 00:49:55 +0000 (0:00:04.476) 0:01:45.350 **** 2025-09-11 00:50:22.808392 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808406 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808414 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808439 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808463 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 00:50:22.808471 | orchestrator | 2025-09-11 00:50:22.808479 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-11 00:50:22.808487 | orchestrator | Thursday 11 September 2025 00:49:58 +0000 (0:00:03.033) 
0:01:48.383 **** 2025-09-11 00:50:22.808495 | orchestrator | 2025-09-11 00:50:22.808503 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-11 00:50:22.808515 | orchestrator | Thursday 11 September 2025 00:49:58 +0000 (0:00:00.066) 0:01:48.449 **** 2025-09-11 00:50:22.808523 | orchestrator | 2025-09-11 00:50:22.808531 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-11 00:50:22.808539 | orchestrator | Thursday 11 September 2025 00:49:58 +0000 (0:00:00.063) 0:01:48.513 **** 2025-09-11 00:50:22.808552 | orchestrator | 2025-09-11 00:50:22.808560 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-11 00:50:22.808568 | orchestrator | Thursday 11 September 2025 00:49:58 +0000 (0:00:00.059) 0:01:48.572 **** 2025-09-11 00:50:22.808576 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:50:22.808583 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:50:22.808591 | orchestrator | 2025-09-11 00:50:22.808603 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-11 00:50:22.808611 | orchestrator | Thursday 11 September 2025 00:50:04 +0000 (0:00:06.145) 0:01:54.718 **** 2025-09-11 00:50:22.808619 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:50:22.808627 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:50:22.808635 | orchestrator | 2025-09-11 00:50:22.808643 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-11 00:50:22.808651 | orchestrator | Thursday 11 September 2025 00:50:11 +0000 (0:00:06.343) 0:02:01.061 **** 2025-09-11 00:50:22.808659 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:50:22.808666 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:50:22.808674 | orchestrator | 2025-09-11 00:50:22.808682 | orchestrator | TASK [ovn-db : Wait for leader election] 
*************************************** 2025-09-11 00:50:22.808690 | orchestrator | Thursday 11 September 2025 00:50:17 +0000 (0:00:06.291) 0:02:07.353 **** 2025-09-11 00:50:22.808698 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:50:22.808705 | orchestrator | 2025-09-11 00:50:22.808713 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-11 00:50:22.808721 | orchestrator | Thursday 11 September 2025 00:50:17 +0000 (0:00:00.153) 0:02:07.506 **** 2025-09-11 00:50:22.808729 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:50:22.808737 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:50:22.808744 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:50:22.808752 | orchestrator | 2025-09-11 00:50:22.808760 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-11 00:50:22.808768 | orchestrator | Thursday 11 September 2025 00:50:18 +0000 (0:00:00.818) 0:02:08.325 **** 2025-09-11 00:50:22.808776 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:50:22.808784 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:50:22.808792 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:50:22.808799 | orchestrator | 2025-09-11 00:50:22.808807 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-11 00:50:22.808815 | orchestrator | Thursday 11 September 2025 00:50:18 +0000 (0:00:00.629) 0:02:08.954 **** 2025-09-11 00:50:22.808823 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:50:22.808831 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:50:22.808839 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:50:22.808846 | orchestrator | 2025-09-11 00:50:22.808854 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-11 00:50:22.808862 | orchestrator | Thursday 11 September 2025 00:50:19 +0000 (0:00:00.773) 0:02:09.728 **** 2025-09-11 
00:50:22.808870 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:50:22.808878 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:50:22.808885 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:50:22.808893 | orchestrator |
2025-09-11 00:50:22.808901 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-11 00:50:22.808909 | orchestrator | Thursday 11 September 2025 00:50:20 +0000 (0:00:00.673) 0:02:10.402 ****
2025-09-11 00:50:22.808917 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.808925 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.808932 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.808940 | orchestrator |
2025-09-11 00:50:22.808948 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-11 00:50:22.808956 | orchestrator | Thursday 11 September 2025 00:50:21 +0000 (0:00:00.705) 0:02:11.107 ****
2025-09-11 00:50:22.808964 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:50:22.808972 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:50:22.808989 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:50:22.808997 | orchestrator |
2025-09-11 00:50:22.809005 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:50:22.809013 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-11 00:50:22.809021 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-11 00:50:22.809029 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-11 00:50:22.809058 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:50:22.809072 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:50:22.809085 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:50:22.809097 | orchestrator |
2025-09-11 00:50:22.809105 | orchestrator |
2025-09-11 00:50:22.809113 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:50:22.809121 | orchestrator | Thursday 11 September 2025 00:50:22 +0000 (0:00:00.891) 0:02:11.999 ****
2025-09-11 00:50:22.809129 | orchestrator | ===============================================================================
2025-09-11 00:50:22.809137 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.03s
2025-09-11 00:50:22.809149 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.29s
2025-09-11 00:50:22.809158 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.87s
2025-09-11 00:50:22.809165 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.12s
2025-09-11 00:50:22.809173 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.85s
2025-09-11 00:50:22.809181 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.48s
2025-09-11 00:50:22.809189 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.83s
2025-09-11 00:50:22.809201 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.03s
2025-09-11 00:50:22.809209 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.56s
2025-09-11 00:50:22.809217 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.45s
2025-09-11 00:50:22.809225 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.56s
2025-09-11 00:50:22.809232 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.51s
2025-09-11 00:50:22.809240 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.50s
2025-09-11 00:50:22.809248 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s
2025-09-11 00:50:22.809256 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.44s
2025-09-11 00:50:22.809264 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.37s
2025-09-11 00:50:22.809271 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.30s
2025-09-11 00:50:22.809279 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.20s
2025-09-11 00:50:22.809287 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.18s
2025-09-11 00:50:22.809295 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.04s
2025-09-11 00:50:22.809303 | orchestrator | 2025-09-11 00:50:22 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:50:25.855581 | orchestrator | 2025-09-11 00:50:25 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:50:25.856910 | orchestrator | 2025-09-11 00:50:25 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:50:25.859087 | orchestrator | 2025-09-11 00:50:25 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:50:28.897250 | orchestrator | 2025-09-11 00:50:28 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:50:28.898766 | orchestrator | 2025-09-11 00:50:28 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:50:28.898986 | orchestrator | 2025-09-11 00:50:28 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:50:31.938949 |
orchestrator | 2025-09-11 00:50:31 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:50:31.940478 | orchestrator | 2025-09-11 00:50:31 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:50:31.940812 | orchestrator | 2025-09-11 00:50:31 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:52:51.905517 | orchestrator | 2025-09-11 00:52:51 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:52:51.905622 | orchestrator | 2025-09-11 00:52:51 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11
00:52:51.905670 | orchestrator | 2025-09-11 00:52:51 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:52:54.942845 | orchestrator | 2025-09-11 00:52:54 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state STARTED
2025-09-11 00:52:54.943609 | orchestrator | 2025-09-11 00:52:54 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:52:54.943641 | orchestrator | 2025-09-11 00:52:54 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:52:57.989853 | orchestrator | 2025-09-11 00:52:57 | INFO  | Task e6b9aa46-35fc-46a4-b31e-be248576449a is in state SUCCESS
2025-09-11 00:52:57.991190 | orchestrator |
2025-09-11 00:52:57.991236 | orchestrator |
2025-09-11 00:52:57.991249 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-11 00:52:57.991262 | orchestrator |
2025-09-11 00:52:57.991273 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-11 00:52:57.991284 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.257) 0:00:00.257 ****
2025-09-11 00:52:57.991295 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:57.991307 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:57.991317 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:57.991329 | orchestrator |
2025-09-11 00:52:57.994303 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-11 00:52:57.994325 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.308) 0:00:00.566 ****
2025-09-11 00:52:57.994337 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-09-11 00:52:57.994348 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-09-11 00:52:57.994359 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-09-11 00:52:57.994370 | orchestrator |
2025-09-11 00:52:57.994382 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-09-11 00:52:57.994393 | orchestrator |
2025-09-11 00:52:57.994403 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-11 00:52:57.994414 | orchestrator | Thursday 11 September 2025 00:47:02 +0000 (0:00:00.368) 0:00:00.934 ****
2025-09-11 00:52:57.994425 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:52:57.994437 | orchestrator |
2025-09-11 00:52:57.994448 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-09-11 00:52:57.994459 | orchestrator | Thursday 11 September 2025 00:47:03 +0000 (0:00:00.464) 0:00:01.399 ****
2025-09-11 00:52:57.994469 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:57.994481 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:57.994491 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:57.994502 | orchestrator |
2025-09-11 00:52:57.994513 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-11 00:52:57.994523 | orchestrator | Thursday 11 September 2025 00:47:03 +0000 (0:00:00.699) 0:00:02.098 ****
2025-09-11 00:52:57.994534 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:52:57.994545 | orchestrator |
2025-09-11 00:52:57.994556 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-09-11 00:52:57.994567 | orchestrator | Thursday 11 September 2025 00:47:04 +0000 (0:00:00.909) 0:00:03.008 ****
2025-09-11 00:52:57.994578 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:57.994588 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:57.994599 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:57.994610 | orchestrator |
2025-09-11 00:52:57.994621 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-09-11 00:52:57.994631 | orchestrator | Thursday 11 September 2025 00:47:05 +0000 (0:00:00.828) 0:00:03.838 ****
2025-09-11 00:52:57.994663 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-11 00:52:57.994674 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-11 00:52:57.994705 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-11 00:52:57.994716 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-11 00:52:57.994727 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-11 00:52:57.994738 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-11 00:52:57.994749 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-11 00:52:57.994760 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-11 00:52:57.994771 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-11 00:52:57.994782 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-11 00:52:57.994792 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-11 00:52:57.994803 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-11 00:52:57.994813 | orchestrator |
2025-09-11 00:52:57.994824 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-11 00:52:57.994835 | orchestrator | Thursday 11 September 2025 00:47:08 +0000 (0:00:02.536) 0:00:06.374 ****
2025-09-11 00:52:57.994845 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-11 00:52:57.994856 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-11 00:52:57.994867 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-11 00:52:57.994878 | orchestrator |
2025-09-11 00:52:57.994889 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-11 00:52:57.994900 | orchestrator | Thursday 11 September 2025 00:47:08 +0000 (0:00:00.763) 0:00:07.137 ****
2025-09-11 00:52:57.994910 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-11 00:52:57.994921 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-11 00:52:57.994932 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-11 00:52:57.994943 | orchestrator |
2025-09-11 00:52:57.994953 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-11 00:52:57.994964 | orchestrator | Thursday 11 September 2025 00:47:10 +0000 (0:00:01.645) 0:00:08.782 ****
2025-09-11 00:52:57.994975 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-09-11 00:52:57.994986 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:57.995015 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-09-11 00:52:57.995026 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:57.995037 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-09-11 00:52:57.995048 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:57.995058 | orchestrator |
2025-09-11 00:52:57.995069 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-09-11 00:52:57.995080 | orchestrator | Thursday 11 September 2025 00:47:11 +0000 (0:00:00.551) 0:00:09.334 ****
2025-09-11 00:52:57.995119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-11 00:52:57.995137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-11 00:52:57.995162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-11 00:52:57.995174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-11 00:52:57.995185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-11 00:52:57.995207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-11 00:52:57.995220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-11 00:52:57.995231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-11 00:52:57.995248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-11 00:52:57.995260 | orchestrator |
2025-09-11 00:52:57.995271 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-09-11 00:52:57.995282 | orchestrator | Thursday 11 September 2025 00:47:14 +0000 (0:00:02.909) 0:00:12.244 ****
2025-09-11 00:52:57.995293 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:52:57.995303 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:52:57.995314 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:52:57.995325 | orchestrator |
2025-09-11 00:52:57.995335 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-09-11 00:52:57.995346 | orchestrator | Thursday 11 September 2025 00:47:15 +0000 (0:00:01.433) 0:00:13.678 ****
2025-09-11 00:52:57.995361 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-09-11 00:52:57.995373 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-09-11 00:52:57.995383 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-09-11 00:52:57.995394 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-09-11 00:52:57.995405 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-09-11 00:52:57.995415 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-09-11 00:52:57.995426 | orchestrator |
2025-09-11 00:52:57.995436 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-09-11 00:52:57.995447 | orchestrator | Thursday 11 September 2025 00:47:17 +0000 (0:00:02.408) 0:00:16.086 ****
2025-09-11 00:52:57.995458 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:52:57.995468 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:52:57.995479 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:52:57.995489 | orchestrator |
2025-09-11 00:52:57.995500 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-09-11 00:52:57.995511 | orchestrator | Thursday 11 September 2025 00:47:18 +0000 (0:00:00.946) 0:00:17.032 ****
2025-09-11 00:52:57.995521 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:57.995532 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:57.995543 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:57.995553 | orchestrator |
2025-09-11 00:52:57.995564 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-09-11 00:52:57.995574 | orchestrator | Thursday 11 September 2025 00:47:21 +0000 (0:00:02.210) 0:00:19.243 ****
2025-09-11 00:52:57.995586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-11 00:52:57.995605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-11 00:52:57.995622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-11 00:52:57.995634 |
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-11 00:52:57.995646 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.995667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.995679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.995690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.995702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-11 00:52:57.995719 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.995738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.995750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.995762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.995777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-11 00:52:57.995789 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.995799 | orchestrator | 2025-09-11 00:52:57.995810 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-11 00:52:57.995821 | orchestrator | Thursday 11 September 2025 00:47:22 +0000 (0:00:01.030) 0:00:20.273 **** 2025-09-11 00:52:57.995833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.995844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.995868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.995880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.995891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.995907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-11 00:52:57.995918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.995930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.995941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-11 00:52:57.995964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.995976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.995987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0', '__omit_place_holder__c0ffe31263e1ad5f97fcbfe2b76ac2d1b20eb9f0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-11 00:52:57.995998 | orchestrator | 2025-09-11 00:52:57.996009 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-11 00:52:57.996020 | orchestrator | Thursday 11 September 2025 00:47:24 +0000 (0:00:02.938) 0:00:23.212 **** 2025-09-11 00:52:57.996036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996068 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996124 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-11 00:52:57.996152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-11 00:52:57.996163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-11 00:52:57.996180 | orchestrator | 2025-09-11 00:52:57.996191 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-11 00:52:57.996202 | orchestrator | Thursday 11 September 2025 00:47:28 +0000 (0:00:03.053) 0:00:26.265 **** 2025-09-11 00:52:57.996213 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-11 00:52:57.996224 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-11 00:52:57.996235 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-11 00:52:57.996246 | orchestrator | 2025-09-11 00:52:57.996257 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-11 00:52:57.996267 | orchestrator | Thursday 11 September 2025 00:47:30 +0000 (0:00:02.520) 0:00:28.786 **** 2025-09-11 00:52:57.996278 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-11 00:52:57.996289 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-11 00:52:57.996300 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-11 00:52:57.996310 | orchestrator | 2025-09-11 00:52:57.996333 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-11 00:52:57.996344 | orchestrator | Thursday 11 September 2025 00:47:35 +0000 (0:00:04.918) 0:00:33.704 **** 
2025-09-11 00:52:57.996355 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.996365 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.996376 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.996386 | orchestrator | 2025-09-11 00:52:57.996397 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-11 00:52:57.996408 | orchestrator | Thursday 11 September 2025 00:47:36 +0000 (0:00:00.564) 0:00:34.268 **** 2025-09-11 00:52:57.996419 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-11 00:52:57.996430 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-11 00:52:57.996440 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-11 00:52:57.996451 | orchestrator | 2025-09-11 00:52:57.996462 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-11 00:52:57.996472 | orchestrator | Thursday 11 September 2025 00:47:38 +0000 (0:00:01.966) 0:00:36.235 **** 2025-09-11 00:52:57.996483 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-11 00:52:57.996494 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-11 00:52:57.996505 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-11 00:52:57.996515 | orchestrator | 2025-09-11 00:52:57.996526 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-11 00:52:57.996536 | orchestrator | Thursday 11 September 2025 00:47:40 +0000 (0:00:02.651) 
0:00:38.887 **** 2025-09-11 00:52:57.996547 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-11 00:52:57.996558 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-11 00:52:57.996568 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-11 00:52:57.996579 | orchestrator | 2025-09-11 00:52:57.996590 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-11 00:52:57.996606 | orchestrator | Thursday 11 September 2025 00:47:43 +0000 (0:00:02.345) 0:00:41.232 **** 2025-09-11 00:52:57.996617 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-11 00:52:57.996627 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-11 00:52:57.996643 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-11 00:52:57.996654 | orchestrator | 2025-09-11 00:52:57.996665 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-11 00:52:57.996675 | orchestrator | Thursday 11 September 2025 00:47:44 +0000 (0:00:01.669) 0:00:42.901 **** 2025-09-11 00:52:57.996686 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:57.996696 | orchestrator | 2025-09-11 00:52:57.996707 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-11 00:52:57.996717 | orchestrator | Thursday 11 September 2025 00:47:45 +0000 (0:00:00.666) 0:00:43.568 **** 2025-09-11 00:52:57.996728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.996813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-11 00:52:57.996824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-11 00:52:57.996835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-11 00:52:57.996846 | orchestrator | 2025-09-11 00:52:57.996857 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-11 00:52:57.996868 | orchestrator | Thursday 11 September 2025 00:47:49 +0000 (0:00:03.878) 0:00:47.447 **** 2025-09-11 00:52:57.996886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.996898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.996915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.996926 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.996945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.996956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.996967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.996978 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.996990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997037 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.997047 | orchestrator | 2025-09-11 00:52:57.997058 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-11 00:52:57.997069 | orchestrator | Thursday 11 September 2025 00:47:49 +0000 (0:00:00.620) 0:00:48.068 **** 2025-09-11 00:52:57.997080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997133 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.997144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997190 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.997201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997240 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.997251 | orchestrator | 2025-09-11 00:52:57.997261 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-11 00:52:57.997272 | orchestrator | Thursday 11 September 2025 00:47:50 +0000 (0:00:00.743) 0:00:48.811 **** 2025-09-11 00:52:57.997283 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997328 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.997340 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997378 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.997389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997439 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.997450 | orchestrator | 2025-09-11 00:52:57.997461 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2025-09-11 00:52:57.997472 | orchestrator | Thursday 11 September 2025 00:47:51 +0000 (0:00:00.730) 0:00:49.541 **** 2025-09-11 00:52:57.997483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-09-11 00:52:57.997517 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.997528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997593 | 
orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.997610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997644 | orchestrator | skipping: [testbed-node-1] 
2025-09-11 00:52:57.997655 | orchestrator | 2025-09-11 00:52:57.997666 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-11 00:52:57.997677 | orchestrator | Thursday 11 September 2025 00:47:51 +0000 (0:00:00.581) 0:00:50.123 **** 2025-09-11 00:52:57.997693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997733 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.997750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997784 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.997795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997832 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.997849 | orchestrator | 2025-09-11 00:52:57.997860 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-11 00:52:57.997870 | orchestrator | Thursday 11 September 2025 00:47:52 +0000 (0:00:00.730) 0:00:50.853 **** 2025-09-11 00:52:57.997881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997921 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.997932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.997948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.997959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.997980 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.997991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.998007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.998050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.998063 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.998074 | orchestrator | 2025-09-11 00:52:57.998085 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-11 00:52:57.998154 | orchestrator | Thursday 11 September 2025 00:47:53 +0000 (0:00:00.996) 0:00:51.850 **** 2025-09-11 00:52:57.998167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.998178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.998195 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.998214 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.998225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.998236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.998255 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.998267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.998278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.998289 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.998305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.998316 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.998327 | orchestrator | 2025-09-11 00:52:57.998338 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-11 00:52:57.998354 | orchestrator | Thursday 11 September 2025 00:47:54 +0000 (0:00:00.712) 0:00:52.562 **** 2025-09-11 00:52:57.998365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.998377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.998389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.998400 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.998418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.998429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.998440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.998451 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.998467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-11 00:52:57.998484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-11 00:52:57.998495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-11 00:52:57.998506 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.998517 | orchestrator | 2025-09-11 00:52:57.998528 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-11 00:52:57.998539 | orchestrator | Thursday 11 September 2025 00:47:56 +0000 (0:00:01.933) 0:00:54.496 **** 2025-09-11 00:52:57.998549 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-11 00:52:57.998560 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-11 00:52:57.998577 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-11 00:52:57.998588 | orchestrator | 2025-09-11 00:52:57.998599 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-11 00:52:57.998609 | orchestrator | Thursday 11 September 2025 00:47:59 +0000 (0:00:03.581) 0:00:58.077 **** 2025-09-11 00:52:57.998620 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-11 00:52:57.998631 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-11 00:52:57.998642 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-11 00:52:57.998653 | orchestrator | 2025-09-11 00:52:57.998662 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-11 00:52:57.998672 | orchestrator | Thursday 11 September 2025 00:48:01 +0000 (0:00:01.584) 0:00:59.661 **** 2025-09-11 00:52:57.998681 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-11 00:52:57.998691 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-11 00:52:57.998700 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-11 00:52:57.998710 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-11 00:52:57.998719 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.998729 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-11 00:52:57.998743 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.998753 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-11 00:52:57.998762 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.998772 | orchestrator | 2025-09-11 00:52:57.998781 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-11 00:52:57.998791 | orchestrator | Thursday 11 September 2025 00:48:02 +0000 (0:00:00.815) 0:01:00.477 **** 2025-09-11 00:52:57.998805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.998815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.998825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-11 00:52:57.998840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.998850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.998861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-11 00:52:57.998876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-11 00:52:57.998890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-11 00:52:57.998900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-11 00:52:57.998910 | orchestrator | 2025-09-11 00:52:57.998920 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-11 00:52:57.998929 | orchestrator | Thursday 11 September 2025 00:48:04 +0000 (0:00:02.466) 0:01:02.943 **** 2025-09-11 00:52:57.998939 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:57.998948 | orchestrator | 2025-09-11 00:52:57.998958 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-11 00:52:57.998967 | orchestrator | Thursday 11 
September 2025 00:48:05 +0000 (0:00:00.636) 0:01:03.580 **** 2025-09-11 00:52:57.998978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-11 00:52:57.998994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-11 00:52:57.999009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-11 00:52:57.999044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-11 00:52:57.999055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-11 00:52:57.999070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-11 00:52:57.999125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999145 | orchestrator | 2025-09-11 00:52:57.999155 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-11 00:52:57.999164 | orchestrator | Thursday 11 September 2025 00:48:11 +0000 (0:00:06.129) 0:01:09.710 **** 2025-09-11 00:52:57.999174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-11 00:52:57.999190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-11 00:52:57.999209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-11 00:52:57.999220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-11 00:52:57.999244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999263 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.999273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999288 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.999303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-11 00:52:57.999313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-11 00:52:57.999323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999347 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.999357 | orchestrator | 2025-09-11 00:52:57.999367 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-11 00:52:57.999376 | orchestrator | Thursday 11 September 2025 00:48:12 +0000 (0:00:01.256) 0:01:10.967 **** 2025-09-11 00:52:57.999386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-11 00:52:57.999397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-11 00:52:57.999407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-11 00:52:57.999416 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:57.999426 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-11 00:52:57.999441 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:57.999450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-11 00:52:57.999460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-11 00:52:57.999470 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:57.999479 | orchestrator | 2025-09-11 00:52:57.999494 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-11 00:52:57.999504 | orchestrator | Thursday 11 September 2025 00:48:13 +0000 (0:00:01.100) 0:01:12.067 **** 2025-09-11 00:52:57.999513 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:57.999523 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:57.999532 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:57.999542 | orchestrator | 2025-09-11 00:52:57.999551 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-11 00:52:57.999561 | orchestrator | Thursday 11 September 2025 00:48:15 +0000 (0:00:01.250) 0:01:13.317 **** 2025-09-11 00:52:57.999570 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:57.999580 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:57.999589 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:57.999598 | orchestrator | 2025-09-11 00:52:57.999608 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-11 00:52:57.999618 | 
orchestrator | Thursday 11 September 2025 00:48:16 +0000 (0:00:01.842) 0:01:15.160 **** 2025-09-11 00:52:57.999627 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:57.999637 | orchestrator | 2025-09-11 00:52:57.999646 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-11 00:52:57.999656 | orchestrator | Thursday 11 September 2025 00:48:17 +0000 (0:00:00.694) 0:01:15.855 **** 2025-09-11 00:52:57.999666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:57.999681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:57.999723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-09-11 00:52:57.999758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999804 | orchestrator | 2025-09-11 00:52:57.999814 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-11 00:52:57.999824 | orchestrator | Thursday 11 September 2025 00:48:20 +0000 (0:00:03.166) 0:01:19.021 **** 2025-09-11 00:52:57.999840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 00:52:57.999850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999870 | orchestrator | skipping: [testbed-node-0] 2025-09-11 
00:52:57.999885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 00:52:57.999903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:57.999923 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.002150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.002208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2025-09-11 00:52:58.002228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.002245 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.002264 | orchestrator | 2025-09-11 00:52:58.002281 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-11 00:52:58.002311 | orchestrator | Thursday 11 September 2025 00:48:21 +0000 (0:00:00.516) 0:01:19.538 **** 2025-09-11 00:52:58.002328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-11 00:52:58.002359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-11 00:52:58.002375 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.002389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-11 00:52:58.002403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-11 00:52:58.002418 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.002434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-11 00:52:58.002449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-11 00:52:58.002461 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.002474 | orchestrator | 2025-09-11 00:52:58.002486 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-11 00:52:58.002499 | orchestrator | Thursday 11 September 2025 00:48:22 +0000 (0:00:00.860) 0:01:20.399 **** 2025-09-11 00:52:58.002511 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.002524 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.002537 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.002550 | orchestrator | 2025-09-11 00:52:58.002563 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-11 00:52:58.002576 | orchestrator | Thursday 11 September 2025 00:48:23 +0000 (0:00:01.392) 0:01:21.791 **** 2025-09-11 00:52:58.002588 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.002600 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.002613 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.002626 | orchestrator | 2025-09-11 00:52:58.002651 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-11 00:52:58.002664 | orchestrator | Thursday 11 September 2025 00:48:25 +0000 
(0:00:01.967) 0:01:23.758 **** 2025-09-11 00:52:58.002677 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.002690 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.002703 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.002714 | orchestrator | 2025-09-11 00:52:58.002726 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-11 00:52:58.002737 | orchestrator | Thursday 11 September 2025 00:48:25 +0000 (0:00:00.255) 0:01:24.014 **** 2025-09-11 00:52:58.002749 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.002761 | orchestrator | 2025-09-11 00:52:58.002772 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-11 00:52:58.002784 | orchestrator | Thursday 11 September 2025 00:48:26 +0000 (0:00:00.567) 0:01:24.582 **** 2025-09-11 00:52:58.002796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-11 00:52:58.002828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-11 00:52:58.002842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-11 00:52:58.002855 | orchestrator | 2025-09-11 00:52:58.002867 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-11 00:52:58.002880 | orchestrator | Thursday 11 September 2025 00:48:28 +0000 (0:00:02.344) 0:01:26.926 **** 2025-09-11 00:52:58.002902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-11 00:52:58.002915 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.002926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-11 00:52:58.002946 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.002958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-11 00:52:58.002970 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.002982 | orchestrator | 2025-09-11 00:52:58.002995 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-11 00:52:58.003012 | orchestrator | Thursday 11 September 2025 00:48:30 +0000 (0:00:01.321) 0:01:28.248 **** 2025-09-11 00:52:58.003026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-11 00:52:58.003039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-11 00:52:58.003052 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.003065 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-11 00:52:58.003079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-11 00:52:58.003109 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.003130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-11 00:52:58.003142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-11 00:52:58.003180 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.003193 | orchestrator | 2025-09-11 00:52:58.003205 | orchestrator | TASK 
[proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-11 00:52:58.003218 | orchestrator | Thursday 11 September 2025 00:48:31 +0000 (0:00:01.842) 0:01:30.091 **** 2025-09-11 00:52:58.003229 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.003241 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.003252 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.003262 | orchestrator | 2025-09-11 00:52:58.003273 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-11 00:52:58.003285 | orchestrator | Thursday 11 September 2025 00:48:32 +0000 (0:00:00.652) 0:01:30.744 **** 2025-09-11 00:52:58.003297 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.003308 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.003319 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.003330 | orchestrator | 2025-09-11 00:52:58.003341 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-11 00:52:58.003352 | orchestrator | Thursday 11 September 2025 00:48:33 +0000 (0:00:01.187) 0:01:31.931 **** 2025-09-11 00:52:58.003364 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.003375 | orchestrator | 2025-09-11 00:52:58.003387 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-11 00:52:58.003398 | orchestrator | Thursday 11 September 2025 00:48:34 +0000 (0:00:00.827) 0:01:32.759 **** 2025-09-11 00:52:58.003410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.003424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.003436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.003564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003622 | orchestrator | 2025-09-11 00:52:58.003633 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-11 00:52:58.003645 | orchestrator | Thursday 11 September 2025 00:48:37 +0000 (0:00:03.449) 0:01:36.209 **** 2025-09-11 00:52:58.003657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.003678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003722 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.003739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.003750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003800 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.003830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.003853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.003895 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.003907 | orchestrator | 2025-09-11 00:52:58.003918 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-11 00:52:58.003929 | orchestrator | Thursday 11 September 2025 00:48:38 +0000 (0:00:00.958) 0:01:37.168 **** 2025-09-11 00:52:58.003941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-11 00:52:58.003959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-11 00:52:58.003972 | orchestrator | skipping: 
[testbed-node-0] 2025-09-11 00:52:58.003984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-11 00:52:58.003995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-11 00:52:58.004007 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.004018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-11 00:52:58.004031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-11 00:52:58.004043 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.004055 | orchestrator | 2025-09-11 00:52:58.004066 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-11 00:52:58.004079 | orchestrator | Thursday 11 September 2025 00:48:39 +0000 (0:00:00.897) 0:01:38.065 **** 2025-09-11 00:52:58.004111 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.004124 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.004135 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.004146 | orchestrator | 2025-09-11 00:52:58.004157 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-11 00:52:58.004169 | orchestrator | Thursday 11 September 2025 00:48:41 +0000 (0:00:01.313) 0:01:39.379 **** 2025-09-11 00:52:58.004180 | orchestrator | 
changed: [testbed-node-0] 2025-09-11 00:52:58.004191 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.004202 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.004213 | orchestrator | 2025-09-11 00:52:58.004224 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-11 00:52:58.004236 | orchestrator | Thursday 11 September 2025 00:48:43 +0000 (0:00:01.989) 0:01:41.368 **** 2025-09-11 00:52:58.004248 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.004260 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.004271 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.004283 | orchestrator | 2025-09-11 00:52:58.004301 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-11 00:52:58.004312 | orchestrator | Thursday 11 September 2025 00:48:43 +0000 (0:00:00.493) 0:01:41.862 **** 2025-09-11 00:52:58.004324 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.004345 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.004356 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.004367 | orchestrator | 2025-09-11 00:52:58.004379 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-11 00:52:58.004390 | orchestrator | Thursday 11 September 2025 00:48:43 +0000 (0:00:00.325) 0:01:42.188 **** 2025-09-11 00:52:58.004400 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.004411 | orchestrator | 2025-09-11 00:52:58.004421 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-11 00:52:58.004432 | orchestrator | Thursday 11 September 2025 00:48:44 +0000 (0:00:00.783) 0:01:42.972 **** 2025-09-11 00:52:58.004444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 00:52:58.004467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 00:52:58.004481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004542 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 00:52:58.004572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 00:52:58.004585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 00:52:58.004647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 00:52:58.004679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-11 
00:52:58.004765 | orchestrator | 2025-09-11 00:52:58.004776 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-11 00:52:58.004787 | orchestrator | Thursday 11 September 2025 00:48:49 +0000 (0:00:04.356) 0:01:47.328 **** 2025-09-11 00:52:58.004805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 00:52:58.004818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 00:52:58.004837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 00:52:58.004936 | orchestrator | skipping: [testbed-node-1] 
2025-09-11 00:52:58.004948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 00:52:58.004965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.004989 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.005008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.005022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 00:52:58.005041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.005053 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.005075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 00:52:58.005087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.005117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.005130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.005150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 
00:52:58.005163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.005182 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.005194 | orchestrator | 2025-09-11 00:52:58.005208 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-11 00:52:58.005222 | orchestrator | Thursday 11 September 2025 00:48:50 +0000 (0:00:00.968) 0:01:48.296 **** 2025-09-11 00:52:58.005235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-11 00:52:58.005247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-11 00:52:58.005260 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.005272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-11 00:52:58.005290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-11 00:52:58.005303 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-11 00:52:58.005315 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.005327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-11 00:52:58.005338 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.005350 | orchestrator | 2025-09-11 00:52:58.005362 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-11 00:52:58.005374 | orchestrator | Thursday 11 September 2025 00:48:51 +0000 (0:00:01.021) 0:01:49.317 **** 2025-09-11 00:52:58.005386 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.005398 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.005410 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.005422 | orchestrator | 2025-09-11 00:52:58.005434 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-11 00:52:58.005446 | orchestrator | Thursday 11 September 2025 00:48:52 +0000 (0:00:01.582) 0:01:50.900 **** 2025-09-11 00:52:58.005459 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.005471 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.005483 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.005495 | orchestrator | 2025-09-11 00:52:58.005508 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-11 00:52:58.005520 | orchestrator | Thursday 11 September 2025 00:48:54 +0000 (0:00:01.611) 0:01:52.511 **** 2025-09-11 00:52:58.005532 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.005544 | orchestrator | skipping: 
[testbed-node-1] 2025-09-11 00:52:58.005557 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.005568 | orchestrator | 2025-09-11 00:52:58.005581 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-11 00:52:58.005592 | orchestrator | Thursday 11 September 2025 00:48:54 +0000 (0:00:00.500) 0:01:53.012 **** 2025-09-11 00:52:58.005604 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.005625 | orchestrator | 2025-09-11 00:52:58.005637 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-11 00:52:58.005649 | orchestrator | Thursday 11 September 2025 00:48:55 +0000 (0:00:00.804) 0:01:53.816 **** 2025-09-11 00:52:58.005997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 00:52:58.006059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 00:52:58.006085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.006130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.006151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 00:52:58.006173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.006186 | orchestrator | 2025-09-11 00:52:58.006197 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-11 00:52:58.006208 | 
orchestrator | Thursday 11 September 2025 00:48:59 +0000 (0:00:04.104) 0:01:57.921 **** 2025-09-11 00:52:58.006225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-11 00:52:58.006246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.006258 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.006275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-11 00:52:58.006300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.006314 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.006329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-11 00:52:58.006354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 
'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.006367 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.006377 | orchestrator | 2025-09-11 00:52:58.006389 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-11 00:52:58.006401 | orchestrator | Thursday 11 September 2025 00:49:02 +0000 (0:00:02.966) 0:02:00.887 **** 2025-09-11 00:52:58.006414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-11 00:52:58.006458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-11 00:52:58.006473 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.006485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-11 00:52:58.006506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-11 00:52:58.006520 | orchestrator | skipping: [testbed-node-1] 2025-09-11 
00:52:58.006533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-11 00:52:58.006553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-11 00:52:58.006566 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.006578 | orchestrator | 2025-09-11 00:52:58.006590 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-11 00:52:58.006603 | orchestrator | Thursday 11 September 2025 00:49:05 +0000 (0:00:03.133) 0:02:04.021 **** 2025-09-11 00:52:58.006615 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.006628 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.006640 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.006653 | orchestrator | 2025-09-11 00:52:58.006666 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-11 00:52:58.006680 | orchestrator | Thursday 11 September 2025 00:49:07 +0000 (0:00:01.229) 0:02:05.250 **** 2025-09-11 00:52:58.006693 | orchestrator | changed: 
[testbed-node-0] 2025-09-11 00:52:58.006705 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.006718 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.006730 | orchestrator | 2025-09-11 00:52:58.006743 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-11 00:52:58.006756 | orchestrator | Thursday 11 September 2025 00:49:09 +0000 (0:00:02.009) 0:02:07.259 **** 2025-09-11 00:52:58.006769 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.006782 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.006794 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.006807 | orchestrator | 2025-09-11 00:52:58.006820 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-11 00:52:58.006833 | orchestrator | Thursday 11 September 2025 00:49:09 +0000 (0:00:00.464) 0:02:07.723 **** 2025-09-11 00:52:58.006847 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.006860 | orchestrator | 2025-09-11 00:52:58.006873 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-11 00:52:58.006885 | orchestrator | Thursday 11 September 2025 00:49:10 +0000 (0:00:00.817) 0:02:08.541 **** 2025-09-11 00:52:58.006902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 00:52:58.006925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 00:52:58.006938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 00:52:58.006951 | orchestrator | 2025-09-11 00:52:58.006964 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-11 00:52:58.006976 | orchestrator | Thursday 11 September 2025 00:49:13 +0000 (0:00:03.172) 0:02:11.713 **** 2025-09-11 00:52:58.006994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-11 00:52:58.007007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-11 00:52:58.007021 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.007033 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.007046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})  2025-09-11 00:52:58.007070 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.007081 | orchestrator | 2025-09-11 00:52:58.007147 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-11 00:52:58.007161 | orchestrator | Thursday 11 September 2025 00:49:13 +0000 (0:00:00.437) 0:02:12.151 **** 2025-09-11 00:52:58.007178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-11 00:52:58.007191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-11 00:52:58.007203 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.007215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-11 00:52:58.007227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-11 00:52:58.007238 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.007248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-11 00:52:58.007260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-11 00:52:58.007271 | orchestrator | skipping: [testbed-node-2] 
2025-09-11 00:52:58.007282 | orchestrator | 2025-09-11 00:52:58.007294 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-11 00:52:58.007307 | orchestrator | Thursday 11 September 2025 00:49:14 +0000 (0:00:00.585) 0:02:12.736 **** 2025-09-11 00:52:58.007319 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.007330 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.007342 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.007352 | orchestrator | 2025-09-11 00:52:58.007362 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-11 00:52:58.007371 | orchestrator | Thursday 11 September 2025 00:49:15 +0000 (0:00:01.224) 0:02:13.961 **** 2025-09-11 00:52:58.007381 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.007392 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.007403 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.007413 | orchestrator | 2025-09-11 00:52:58.007424 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-11 00:52:58.007435 | orchestrator | Thursday 11 September 2025 00:49:17 +0000 (0:00:01.815) 0:02:15.776 **** 2025-09-11 00:52:58.007444 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.007455 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.007472 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.007483 | orchestrator | 2025-09-11 00:52:58.007493 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-11 00:52:58.007503 | orchestrator | Thursday 11 September 2025 00:49:17 +0000 (0:00:00.379) 0:02:16.156 **** 2025-09-11 00:52:58.007513 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.007523 | orchestrator | 2025-09-11 00:52:58.007533 | orchestrator | TASK [haproxy-config 
: Copying over horizon haproxy config] ******************** 2025-09-11 00:52:58.007543 | orchestrator | Thursday 11 September 2025 00:49:18 +0000 (0:00:00.790) 0:02:16.947 **** 2025-09-11 00:52:58.007560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:52:58.007589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:52:58.007608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:52:58.007620 | orchestrator | 2025-09-11 00:52:58.007630 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-11 00:52:58.007641 | orchestrator | Thursday 11 September 2025 00:49:22 +0000 (0:00:03.721) 0:02:20.669 **** 2025-09-11 00:52:58.007675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-11 00:52:58.007695 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.007709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-11 00:52:58.007721 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.007738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-11 00:52:58.007756 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.007767 | orchestrator | 2025-09-11 00:52:58.007777 | orchestrator | TASK [haproxy-config : Configuring firewall for 
horizon] *********************** 2025-09-11 00:52:58.007788 | orchestrator | Thursday 11 September 2025 00:49:23 +0000 (0:00:01.054) 0:02:21.723 **** 2025-09-11 00:52:58.007799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-11 00:52:58.007815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-11 00:52:58.007826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-11 00:52:58.007837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-11 00:52:58.007848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}})  2025-09-11 00:52:58.007859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-11 00:52:58.007870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-11 00:52:58.007944 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.007958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-11 00:52:58.007983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-11 00:52:58.007996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-11 00:52:58.008007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-11 00:52:58.008017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-11 00:52:58.008028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-11 00:52:58.008039 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.008050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-11 00:52:58.008065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-11 00:52:58.008077 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.008089 | orchestrator | 2025-09-11 00:52:58.008117 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-11 00:52:58.008127 | orchestrator | Thursday 11 September 2025 00:49:24 +0000 (0:00:00.792) 0:02:22.516 **** 2025-09-11 00:52:58.008137 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.008148 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.008158 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.008169 | orchestrator | 2025-09-11 00:52:58.008180 | orchestrator | TASK [proxysql-config : Copying over 
horizon ProxySQL rules config] ************ 2025-09-11 00:52:58.008191 | orchestrator | Thursday 11 September 2025 00:49:25 +0000 (0:00:01.523) 0:02:24.039 **** 2025-09-11 00:52:58.008202 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.008212 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.008223 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.008233 | orchestrator | 2025-09-11 00:52:58.008244 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-11 00:52:58.008254 | orchestrator | Thursday 11 September 2025 00:49:27 +0000 (0:00:01.920) 0:02:25.960 **** 2025-09-11 00:52:58.008264 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.008275 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.008286 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.008296 | orchestrator | 2025-09-11 00:52:58.008306 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-11 00:52:58.008317 | orchestrator | Thursday 11 September 2025 00:49:28 +0000 (0:00:00.273) 0:02:26.233 **** 2025-09-11 00:52:58.008336 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.008346 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.008353 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.008359 | orchestrator | 2025-09-11 00:52:58.008365 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-11 00:52:58.008371 | orchestrator | Thursday 11 September 2025 00:49:28 +0000 (0:00:00.393) 0:02:26.627 **** 2025-09-11 00:52:58.008378 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.008384 | orchestrator | 2025-09-11 00:52:58.008390 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-11 00:52:58.008396 | orchestrator | Thursday 11 
September 2025 00:49:29 +0000 (0:00:00.923) 0:02:27.550 **** 2025-09-11 00:52:58.008410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:52:58.008418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:52:58.008429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:52:58.008437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:52:58.008447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:52:58.008457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:52:58.008464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:52:58.008471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:52:58.008482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:52:58.008489 | orchestrator | 2025-09-11 00:52:58.008495 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-11 00:52:58.008502 | orchestrator | Thursday 11 September 2025 00:49:33 +0000 (0:00:03.981) 0:02:31.532 **** 2025-09-11 00:52:58.008508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-11 00:52:58.008519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:52:58.008530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:52:58.008540 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.008551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-11 00:52:58.008566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:52:58.008579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:52:58.008595 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.008607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-11 00:52:58.008781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:52:58.008801 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:52:58.008813 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.008825 | orchestrator | 2025-09-11 00:52:58.008837 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-11 00:52:58.008848 | orchestrator | Thursday 11 September 2025 00:49:34 +0000 (0:00:00.922) 0:02:32.455 **** 2025-09-11 00:52:58.008859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-11 00:52:58.008870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-11 00:52:58.008880 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.008892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-11 00:52:58.008915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-11 00:52:58.008926 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.008937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-11 00:52:58.008948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-11 00:52:58.008959 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.008971 | orchestrator | 2025-09-11 00:52:58.008982 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-11 00:52:58.008992 | orchestrator | Thursday 11 September 2025 00:49:35 +0000 (0:00:00.963) 0:02:33.418 **** 2025-09-11 00:52:58.009003 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.009013 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.009023 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.009034 | orchestrator | 2025-09-11 00:52:58.009044 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-11 00:52:58.009055 | orchestrator | Thursday 11 September 2025 00:49:36 +0000 (0:00:01.299) 0:02:34.718 **** 2025-09-11 00:52:58.009065 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.009077 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.009087 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.009149 | orchestrator | 
2025-09-11 00:52:58.009161 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-11 00:52:58.009172 | orchestrator | Thursday 11 September 2025 00:49:38 +0000 (0:00:02.103) 0:02:36.821 **** 2025-09-11 00:52:58.009182 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.009193 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.009203 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.009213 | orchestrator | 2025-09-11 00:52:58.009224 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-11 00:52:58.009235 | orchestrator | Thursday 11 September 2025 00:49:39 +0000 (0:00:00.536) 0:02:37.358 **** 2025-09-11 00:52:58.009245 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.009256 | orchestrator | 2025-09-11 00:52:58.009266 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-11 00:52:58.009277 | orchestrator | Thursday 11 September 2025 00:49:40 +0000 (0:00:00.937) 0:02:38.295 **** 2025-09-11 00:52:58.009398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 00:52:58.009416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.009443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 00:52:58.009454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.009464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 00:52:58.009539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.009554 | orchestrator | 2025-09-11 00:52:58.009571 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-11 00:52:58.009581 | orchestrator | Thursday 11 September 2025 00:49:44 +0000 (0:00:04.144) 0:02:42.440 **** 2025-09-11 00:52:58.009589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 00:52:58.009603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.009613 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.009622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 00:52:58.009692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.009706 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.009716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 00:52:58.009734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.009743 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.009752 | orchestrator | 2025-09-11 
00:52:58.009761 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-11 00:52:58.009770 | orchestrator | Thursday 11 September 2025 00:49:45 +0000 (0:00:01.055) 0:02:43.495 **** 2025-09-11 00:52:58.009783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-11 00:52:58.009792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-11 00:52:58.009802 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.009810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-11 00:52:58.009819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-11 00:52:58.009828 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.009836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-11 00:52:58.009844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-11 00:52:58.009853 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.009862 | orchestrator | 2025-09-11 00:52:58.009871 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] 
************* 2025-09-11 00:52:58.009880 | orchestrator | Thursday 11 September 2025 00:49:46 +0000 (0:00:00.799) 0:02:44.295 **** 2025-09-11 00:52:58.009890 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.009899 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.009925 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.009937 | orchestrator | 2025-09-11 00:52:58.009947 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-11 00:52:58.009957 | orchestrator | Thursday 11 September 2025 00:49:47 +0000 (0:00:01.290) 0:02:45.585 **** 2025-09-11 00:52:58.009966 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.009976 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.009985 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.010002 | orchestrator | 2025-09-11 00:52:58.010031 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-11 00:52:58.010042 | orchestrator | Thursday 11 September 2025 00:49:49 +0000 (0:00:02.196) 0:02:47.782 **** 2025-09-11 00:52:58.010136 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.010152 | orchestrator | 2025-09-11 00:52:58.010161 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-11 00:52:58.010171 | orchestrator | Thursday 11 September 2025 00:49:50 +0000 (0:00:01.208) 0:02:48.991 **** 2025-09-11 00:52:58.010181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-11 00:52:58.010191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-11 00:52:58.010306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-11 00:52:58.010359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 
5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010459 | orchestrator | 2025-09-11 00:52:58.010469 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-11 00:52:58.010500 | orchestrator | Thursday 11 September 2025 00:49:54 +0000 (0:00:03.970) 0:02:52.961 **** 2025-09-11 00:52:58.010510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-11 00:52:58.010521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010557 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.010568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-11 00:52:58.010647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010679 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.010693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-11 00:52:58.010704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.010822 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.010832 | orchestrator | 2025-09-11 00:52:58.010842 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-11 00:52:58.010851 | orchestrator | Thursday 11 September 2025 00:49:55 +0000 (0:00:00.598) 0:02:53.560 **** 2025-09-11 00:52:58.010861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-11 00:52:58.010870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-11 00:52:58.010880 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.010889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-11 00:52:58.010898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-11 00:52:58.010907 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.010915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-11 00:52:58.010924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-11 00:52:58.010933 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.010942 | orchestrator | 2025-09-11 00:52:58.010951 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-11 00:52:58.010960 | orchestrator | Thursday 11 September 2025 00:49:56 +0000 (0:00:01.121) 0:02:54.681 **** 2025-09-11 00:52:58.010970 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.010979 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.010988 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.010996 | orchestrator | 2025-09-11 00:52:58.011005 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-11 00:52:58.011019 | orchestrator | Thursday 11 September 2025 00:49:57 +0000 (0:00:01.287) 0:02:55.969 **** 2025-09-11 00:52:58.011029 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.011044 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.011054 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.011063 | orchestrator | 2025-09-11 
00:52:58.011073 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-11 00:52:58.011114 | orchestrator | Thursday 11 September 2025 00:49:59 +0000 (0:00:02.036) 0:02:58.005 **** 2025-09-11 00:52:58.011125 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.011134 | orchestrator | 2025-09-11 00:52:58.011144 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-11 00:52:58.011153 | orchestrator | Thursday 11 September 2025 00:50:01 +0000 (0:00:01.449) 0:02:59.455 **** 2025-09-11 00:52:58.011162 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-11 00:52:58.011172 | orchestrator | 2025-09-11 00:52:58.011181 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-11 00:52:58.011189 | orchestrator | Thursday 11 September 2025 00:50:04 +0000 (0:00:03.030) 0:03:02.486 **** 2025-09-11 00:52:58.011265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:52:58.011279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-11 00:52:58.011289 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.011304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:52:58.011322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-11 00:52:58.011332 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.011451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-09-11 00:52:58.011467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-11 00:52:58.011484 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.011494 | orchestrator | 2025-09-11 00:52:58.011507 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-11 00:52:58.011516 | orchestrator | Thursday 11 September 2025 00:50:06 +0000 (0:00:02.075) 0:03:04.561 **** 2025-09-11 00:52:58.011526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:52:58.011592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-11 00:52:58.011605 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.011620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:52:58.011638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-11 00:52:58.011648 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.011711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:52:58.011725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-11 00:52:58.011741 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.011751 | orchestrator | 2025-09-11 00:52:58.011761 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-11 00:52:58.011770 | orchestrator | Thursday 11 September 2025 00:50:08 +0000 (0:00:01.897) 0:03:06.458 **** 2025-09-11 00:52:58.011803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-11 00:52:58.011817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-11 00:52:58.011827 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.011837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-11 00:52:58.011847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-11 00:52:58.011857 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.011943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-11 00:52:58.011959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-11 00:52:58.011976 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.011986 | orchestrator | 2025-09-11 00:52:58.011995 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-11 00:52:58.012005 | orchestrator | Thursday 11 September 2025 00:50:10 +0000 (0:00:02.310) 0:03:08.769 **** 2025-09-11 00:52:58.012029 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.012040 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.012049 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.012059 | orchestrator | 2025-09-11 00:52:58.012068 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-11 00:52:58.012078 | orchestrator | Thursday 11 September 2025 00:50:12 +0000 (0:00:01.807) 0:03:10.576 **** 2025-09-11 00:52:58.012088 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.012112 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.012121 | orchestrator | 
skipping: [testbed-node-2] 2025-09-11 00:52:58.012129 | orchestrator | 2025-09-11 00:52:58.012138 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-11 00:52:58.012146 | orchestrator | Thursday 11 September 2025 00:50:13 +0000 (0:00:01.349) 0:03:11.926 **** 2025-09-11 00:52:58.012155 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.012164 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.012173 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.012182 | orchestrator | 2025-09-11 00:52:58.012196 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-11 00:52:58.012205 | orchestrator | Thursday 11 September 2025 00:50:14 +0000 (0:00:00.334) 0:03:12.261 **** 2025-09-11 00:52:58.012214 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.012224 | orchestrator | 2025-09-11 00:52:58.012233 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-11 00:52:58.012242 | orchestrator | Thursday 11 September 2025 00:50:15 +0000 (0:00:01.316) 0:03:13.577 **** 2025-09-11 00:52:58.012252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 
2025-09-11 00:52:58.012263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-11 00:52:58.012337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-11 00:52:58.012358 | orchestrator | 2025-09-11 00:52:58.012368 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-11 00:52:58.012378 | orchestrator | Thursday 11 September 2025 00:50:16 +0000 (0:00:01.457) 0:03:15.035 **** 2025-09-11 00:52:58.012388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-11 00:52:58.012401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-11 00:52:58.012412 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.012422 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.012432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-11 00:52:58.012443 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.012452 | orchestrator | 2025-09-11 00:52:58.012462 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-11 00:52:58.012471 | orchestrator | Thursday 11 September 2025 00:50:17 +0000 (0:00:00.366) 0:03:15.401 **** 2025-09-11 00:52:58.012481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-11 00:52:58.012491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-11 00:52:58.012506 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.012516 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.012579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-11 00:52:58.012590 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.012599 | orchestrator | 2025-09-11 00:52:58.012608 | orchestrator | TASK 
[proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-11 00:52:58.012618 | orchestrator | Thursday 11 September 2025 00:50:17 +0000 (0:00:00.796) 0:03:16.197 **** 2025-09-11 00:52:58.012627 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.012636 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.012645 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.012654 | orchestrator | 2025-09-11 00:52:58.012663 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-11 00:52:58.012672 | orchestrator | Thursday 11 September 2025 00:50:18 +0000 (0:00:00.436) 0:03:16.634 **** 2025-09-11 00:52:58.012681 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.012690 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.012699 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.012708 | orchestrator | 2025-09-11 00:52:58.012731 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-11 00:52:58.012742 | orchestrator | Thursday 11 September 2025 00:50:19 +0000 (0:00:01.253) 0:03:17.888 **** 2025-09-11 00:52:58.012752 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.012762 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.012772 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.012782 | orchestrator | 2025-09-11 00:52:58.012793 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-11 00:52:58.012803 | orchestrator | Thursday 11 September 2025 00:50:19 +0000 (0:00:00.323) 0:03:18.211 **** 2025-09-11 00:52:58.012813 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.012824 | orchestrator | 2025-09-11 00:52:58.012834 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-11 00:52:58.012844 
| orchestrator | Thursday 11 September 2025 00:50:21 +0000 (0:00:01.504) 0:03:19.716 **** 2025-09-11 00:52:58.012859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 00:52:58.012871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.012889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.012955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 00:52:58.012969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.012984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-11 00:52:58.012995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013012 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-11 00:52:58.013160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013176 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.013286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.013413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.013455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.013533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.013573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.013582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 00:52:58.013632 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-11 00:52:58.013691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.013785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.013833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.013901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.013910 | orchestrator | 2025-09-11 00:52:58.013918 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-11 00:52:58.013926 | orchestrator | Thursday 11 September 2025 00:50:25 +0000 (0:00:04.445) 0:03:24.161 **** 2025-09-11 00:52:58.013938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 00:52:58.013953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.013962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}}})  2025-09-11 00:52:58.014250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 00:52:58.014289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-11 00:52:58.014311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.014543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.014584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-11 00:52:58.014596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 00:52:58.014711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.014754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.014778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.014888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-11 00:52:58.014910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.014949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.014960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-11 00:52:58.015042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.015058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.015077 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.015119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.015133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.015154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.015236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.015252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-11 00:52:58.015272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.015284 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.015296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.015313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.015331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.015347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.015395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.015415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.015427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-11 00:52:58.015443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.015454 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.015466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-11 00:52:58.015477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.015519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-11 00:52:58.015551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-11 00:52:58.015563 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.015575 | orchestrator | 2025-09-11 00:52:58.015587 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-11 00:52:58.015598 | orchestrator | Thursday 11 September 2025 00:50:27 +0000 (0:00:01.360) 0:03:25.521 **** 2025-09-11 00:52:58.015610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-11 00:52:58.015621 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-11 00:52:58.015633 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.015643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-11 00:52:58.015659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-11 00:52:58.015670 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.015681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-11 00:52:58.015692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-11 00:52:58.015703 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.015713 | orchestrator | 2025-09-11 00:52:58.015733 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-11 00:52:58.015745 | orchestrator | Thursday 11 September 2025 00:50:28 +0000 (0:00:01.656) 0:03:27.178 **** 2025-09-11 00:52:58.015755 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.015766 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.015777 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.015787 | orchestrator | 2025-09-11 00:52:58.015798 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL 
rules config] ************ 2025-09-11 00:52:58.015809 | orchestrator | Thursday 11 September 2025 00:50:30 +0000 (0:00:01.170) 0:03:28.348 **** 2025-09-11 00:52:58.015819 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.015830 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.015841 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.015852 | orchestrator | 2025-09-11 00:52:58.015871 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-11 00:52:58.015883 | orchestrator | Thursday 11 September 2025 00:50:31 +0000 (0:00:01.853) 0:03:30.202 **** 2025-09-11 00:52:58.015895 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.015908 | orchestrator | 2025-09-11 00:52:58.015921 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-11 00:52:58.015933 | orchestrator | Thursday 11 September 2025 00:50:33 +0000 (0:00:01.192) 0:03:31.394 **** 2025-09-11 00:52:58.015990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.016008 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.016027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.016040 | orchestrator | 2025-09-11 00:52:58.016052 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single 
external frontend] *** 2025-09-11 00:52:58.016065 | orchestrator | Thursday 11 September 2025 00:50:36 +0000 (0:00:03.519) 0:03:34.914 **** 2025-09-11 00:52:58.016078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.016165 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.016216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.016231 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.016243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.016254 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.016265 | orchestrator | 2025-09-11 00:52:58.016276 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-11 00:52:58.016287 | orchestrator | Thursday 11 September 2025 00:50:37 +0000 (0:00:00.465) 0:03:35.379 **** 2025-09-11 00:52:58.016298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-11 00:52:58.016309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-11 00:52:58.016321 | 
orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.016332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-11 00:52:58.016348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-11 00:52:58.016359 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.016370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-11 00:52:58.016387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-11 00:52:58.016398 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.016409 | orchestrator | 2025-09-11 00:52:58.016420 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-11 00:52:58.016430 | orchestrator | Thursday 11 September 2025 00:50:37 +0000 (0:00:00.667) 0:03:36.047 **** 2025-09-11 00:52:58.016441 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.016452 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.016462 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.016473 | orchestrator | 2025-09-11 00:52:58.016484 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-11 00:52:58.016494 | orchestrator | Thursday 11 September 2025 00:50:39 +0000 (0:00:01.254) 0:03:37.302 **** 2025-09-11 
00:52:58.016505 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.016515 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.016526 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.016537 | orchestrator | 2025-09-11 00:52:58.016547 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-11 00:52:58.016558 | orchestrator | Thursday 11 September 2025 00:50:41 +0000 (0:00:02.018) 0:03:39.321 **** 2025-09-11 00:52:58.016568 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.016579 | orchestrator | 2025-09-11 00:52:58.016590 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-11 00:52:58.016600 | orchestrator | Thursday 11 September 2025 00:50:42 +0000 (0:00:01.316) 0:03:40.638 **** 2025-09-11 00:52:58.016642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.016657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.016669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.016693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.016706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.016747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.016760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.016777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.016795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.016807 | orchestrator | 2025-09-11 00:52:58.016818 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-11 00:52:58.016828 | orchestrator | Thursday 11 September 2025 00:50:46 +0000 (0:00:03.817) 0:03:44.455 **** 2025-09-11 00:52:58.016870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.016884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.016896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.016907 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.016923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.016942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.016954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.016965 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.017005 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.017019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.017043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 
'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.017055 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.017065 | orchestrator | 2025-09-11 00:52:58.017076 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-11 00:52:58.017087 | orchestrator | Thursday 11 September 2025 00:50:46 +0000 (0:00:00.739) 0:03:45.195 **** 2025-09-11 00:52:58.017115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017161 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.017172 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017271 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.017282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-11 00:52:58.017327 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.017338 | orchestrator | 2025-09-11 00:52:58.017349 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-11 00:52:58.017360 | orchestrator | Thursday 11 September 2025 00:50:47 +0000 (0:00:01.003) 0:03:46.199 **** 2025-09-11 00:52:58.017370 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.017381 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.017392 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.017402 | orchestrator | 2025-09-11 00:52:58.017413 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-11 00:52:58.017424 | orchestrator | Thursday 11 September 2025 00:50:49 +0000 (0:00:01.383) 0:03:47.583 **** 2025-09-11 00:52:58.017435 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.017445 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.017456 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.017466 | orchestrator | 2025-09-11 00:52:58.017477 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-11 00:52:58.017488 | orchestrator | Thursday 11 September 2025 00:50:51 +0000 (0:00:02.051) 0:03:49.634 **** 2025-09-11 00:52:58.017498 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.017509 | orchestrator | 2025-09-11 00:52:58.017520 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-11 00:52:58.017531 | orchestrator | Thursday 11 September 2025 00:50:52 +0000 (0:00:01.514) 0:03:51.149 **** 2025-09-11 00:52:58.017546 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-11 
00:52:58.017558 | orchestrator | 2025-09-11 00:52:58.017568 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-11 00:52:58.017579 | orchestrator | Thursday 11 September 2025 00:50:53 +0000 (0:00:00.789) 0:03:51.938 **** 2025-09-11 00:52:58.017590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-11 00:52:58.017602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-11 00:52:58.017614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-11 00:52:58.017625 | orchestrator | 2025-09-11 00:52:58.017636 | orchestrator | TASK 
[haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-11 00:52:58.017653 | orchestrator | Thursday 11 September 2025 00:50:57 +0000 (0:00:04.084) 0:03:56.023 **** 2025-09-11 00:52:58.017694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-11 00:52:58.017707 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.017718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-11 00:52:58.017729 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.017740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
2025-09-11 00:52:58.017751 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.017762 | orchestrator | 2025-09-11 00:52:58.017773 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-11 00:52:58.017784 | orchestrator | Thursday 11 September 2025 00:50:59 +0000 (0:00:01.385) 0:03:57.408 **** 2025-09-11 00:52:58.017795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-11 00:52:58.017811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-11 00:52:58.017823 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.017834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-11 00:52:58.017845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-11 00:52:58.017856 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.017866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-11 00:52:58.017878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-11 00:52:58.017889 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.017899 | orchestrator | 2025-09-11 00:52:58.017910 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-11 00:52:58.017928 | orchestrator | Thursday 11 September 2025 00:51:00 +0000 (0:00:01.551) 0:03:58.959 **** 2025-09-11 00:52:58.017938 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.017949 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.017959 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.017970 | orchestrator | 2025-09-11 00:52:58.017981 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-11 00:52:58.017992 | orchestrator | Thursday 11 September 2025 00:51:03 +0000 (0:00:02.530) 0:04:01.489 **** 2025-09-11 00:52:58.018002 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.018013 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.018057 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.018068 | orchestrator | 2025-09-11 00:52:58.018079 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-11 00:52:58.018105 | orchestrator | Thursday 11 September 2025 00:51:06 +0000 (0:00:03.036) 0:04:04.526 **** 2025-09-11 00:52:58.018151 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-11 00:52:58.018164 | orchestrator | 2025-09-11 00:52:58.018175 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-11 00:52:58.018186 | orchestrator | Thursday 11 September 2025 00:51:07 +0000 
(0:00:01.278) 0:04:05.805 **** 2025-09-11 00:52:58.018197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-11 00:52:58.018209 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.018220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-11 00:52:58.018231 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.018242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-11 00:52:58.018253 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.018264 | orchestrator | 2025-09-11 
00:52:58.018280 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-11 00:52:58.018291 | orchestrator | Thursday 11 September 2025 00:51:08 +0000 (0:00:01.369) 0:04:07.175 **** 2025-09-11 00:52:58.018302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-11 00:52:58.018320 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.018331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-11 00:52:58.018342 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.018353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-11 00:52:58.018364 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.018374 | orchestrator | 2025-09-11 00:52:58.018385 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-11 00:52:58.018396 | orchestrator | Thursday 11 September 2025 00:51:10 +0000 (0:00:01.294) 0:04:08.470 **** 2025-09-11 00:52:58.018407 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.018418 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.018428 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.018439 | orchestrator | 2025-09-11 00:52:58.018480 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-11 00:52:58.018493 | orchestrator | Thursday 11 September 2025 00:51:11 +0000 (0:00:01.687) 0:04:10.157 **** 2025-09-11 00:52:58.018504 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:52:58.018515 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:52:58.018526 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:52:58.018537 | orchestrator | 2025-09-11 00:52:58.018548 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-11 00:52:58.018559 | orchestrator | Thursday 11 September 2025 00:51:14 +0000 (0:00:02.457) 0:04:12.614 **** 2025-09-11 00:52:58.018569 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:52:58.018580 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:52:58.018591 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:52:58.018601 | orchestrator | 2025-09-11 00:52:58.018612 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-11 00:52:58.018623 | orchestrator | Thursday 11 September 2025 00:51:17 +0000 (0:00:03.061) 0:04:15.676 **** 2025-09-11 00:52:58.018634 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-11 00:52:58.018645 | orchestrator | 2025-09-11 00:52:58.018656 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-11 00:52:58.018666 | orchestrator | Thursday 11 September 2025 00:51:18 +0000 (0:00:00.805) 0:04:16.481 **** 2025-09-11 00:52:58.018678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-11 00:52:58.018689 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.018711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-11 00:52:58.018723 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.018734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-11 00:52:58.018745 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.018756 | orchestrator | 2025-09-11 00:52:58.018766 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-11 00:52:58.018777 | orchestrator | Thursday 11 September 2025 00:51:19 +0000 (0:00:01.235) 0:04:17.716 **** 2025-09-11 00:52:58.018788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-11 00:52:58.018800 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.018811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-11 00:52:58.018822 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.018865 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-11 00:52:58.018878 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.018889 | orchestrator | 2025-09-11 00:52:58.018899 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-11 00:52:58.018910 | orchestrator | Thursday 11 September 2025 00:51:20 +0000 (0:00:01.334) 0:04:19.050 **** 2025-09-11 00:52:58.018921 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.018932 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.018943 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.018953 | orchestrator | 2025-09-11 00:52:58.018964 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-11 00:52:58.018975 | orchestrator | Thursday 11 September 2025 00:51:22 +0000 (0:00:01.484) 0:04:20.535 **** 2025-09-11 00:52:58.018992 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:52:58.019003 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:52:58.019014 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:52:58.019025 | orchestrator | 2025-09-11 00:52:58.019036 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-11 00:52:58.019047 | orchestrator | Thursday 11 September 2025 00:51:24 +0000 (0:00:02.315) 0:04:22.850 **** 2025-09-11 00:52:58.019057 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:52:58.019068 | orchestrator | ok: [testbed-node-1] 2025-09-11 
00:52:58.019079 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:52:58.019103 | orchestrator | 2025-09-11 00:52:58.019115 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-11 00:52:58.019126 | orchestrator | Thursday 11 September 2025 00:51:27 +0000 (0:00:03.163) 0:04:26.014 **** 2025-09-11 00:52:58.019136 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.019147 | orchestrator | 2025-09-11 00:52:58.019158 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-11 00:52:58.019169 | orchestrator | Thursday 11 September 2025 00:51:29 +0000 (0:00:01.518) 0:04:27.533 **** 2025-09-11 00:52:58.019185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.019197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 00:52:58.019209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.019274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.019301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 00:52:58.019313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 
00:52:58.019384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.019395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 00:52:58.019411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.019445 | orchestrator | 2025-09-11 00:52:58.019456 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-11 00:52:58.019467 | orchestrator | Thursday 11 September 2025 00:51:32 +0000 (0:00:03.351) 0:04:30.884 **** 2025-09-11 00:52:58.019508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': 
'30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.019531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 00:52:58.019543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.019577 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.019588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.019628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 00:52:58.019720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.019764 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.019780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.019792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 00:52:58.019804 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 00:52:58.019874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 00:52:58.019885 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.019896 | orchestrator | 2025-09-11 00:52:58.019907 | 
orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-11 00:52:58.019918 | orchestrator | Thursday 11 September 2025 00:51:33 +0000 (0:00:00.686) 0:04:31.570 **** 2025-09-11 00:52:58.019929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-11 00:52:58.019940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-11 00:52:58.019952 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.019962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-11 00:52:58.019978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-11 00:52:58.019989 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.020000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-11 00:52:58.020011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-11 00:52:58.020022 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.020032 | orchestrator | 
2025-09-11 00:52:58.020043 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-11 00:52:58.020054 | orchestrator | Thursday 11 September 2025 00:51:34 +0000 (0:00:01.419) 0:04:32.990 **** 2025-09-11 00:52:58.020065 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.020075 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.020086 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.020125 | orchestrator | 2025-09-11 00:52:58.020143 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-11 00:52:58.020154 | orchestrator | Thursday 11 September 2025 00:51:36 +0000 (0:00:01.600) 0:04:34.591 **** 2025-09-11 00:52:58.020165 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:52:58.020176 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:52:58.020187 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:52:58.020197 | orchestrator | 2025-09-11 00:52:58.020208 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-11 00:52:58.020219 | orchestrator | Thursday 11 September 2025 00:51:38 +0000 (0:00:02.147) 0:04:36.738 **** 2025-09-11 00:52:58.020230 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.020241 | orchestrator | 2025-09-11 00:52:58.020252 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-11 00:52:58.020263 | orchestrator | Thursday 11 September 2025 00:51:39 +0000 (0:00:01.339) 0:04:38.077 **** 2025-09-11 00:52:58.020308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:52:58.020322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:52:58.020334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:52:58.020352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:52:58.020401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:52:58.020416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:52:58.020428 | orchestrator | 2025-09-11 00:52:58.020439 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-11 00:52:58.020450 | orchestrator | Thursday 11 September 2025 00:51:45 +0000 (0:00:05.295) 0:04:43.373 **** 2025-09-11 00:52:58.020466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-11 00:52:58.020478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-11 00:52:58.020496 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.020508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-11 00:52:58.020550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-11 00:52:58.020563 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.020575 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-11 00:52:58.020591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-11 00:52:58.020609 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.020620 | orchestrator | 
2025-09-11 00:52:58.020631 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-11 00:52:58.020642 | orchestrator | Thursday 11 September 2025 00:51:45 +0000 (0:00:00.566) 0:04:43.939 **** 2025-09-11 00:52:58.020652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-11 00:52:58.020663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-11 00:52:58.020675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-11 00:52:58.020686 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.020697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-11 00:52:58.020737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-11 00:52:58.020751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-11 00:52:58.020762 | orchestrator | skipping: [testbed-node-1] 2025-09-11 
00:52:58.020773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-11 00:52:58.020784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-11 00:52:58.020795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-11 00:52:58.020806 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.020817 | orchestrator | 2025-09-11 00:52:58.020828 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-11 00:52:58.020839 | orchestrator | Thursday 11 September 2025 00:51:46 +0000 (0:00:00.812) 0:04:44.752 **** 2025-09-11 00:52:58.020849 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.020860 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.020871 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.020888 | orchestrator | 2025-09-11 00:52:58.020898 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-11 00:52:58.020909 | orchestrator | Thursday 11 September 2025 00:51:47 +0000 (0:00:00.785) 0:04:45.538 **** 2025-09-11 00:52:58.020920 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.020931 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.020941 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.020952 | orchestrator | 2025-09-11 00:52:58.020962 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2025-09-11 00:52:58.020973 | orchestrator | Thursday 11 September 2025 00:51:48 +0000 (0:00:01.286) 0:04:46.825 **** 2025-09-11 00:52:58.020988 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.020999 | orchestrator | 2025-09-11 00:52:58.021010 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-11 00:52:58.021021 | orchestrator | Thursday 11 September 2025 00:51:50 +0000 (0:00:01.450) 0:04:48.275 **** 2025-09-11 00:52:58.021032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-11 00:52:58.021044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 00:52:58.021055 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-11 00:52:58.021196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 00:52:58.021208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 
00:52:58.021231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-11 00:52:58.021284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 00:52:58.021302 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-11 00:52:58.021352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-11 00:52:58.021363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-11 00:52:58.021414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-11 00:52:58.021430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-11 00:52:58.021480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-11 00:52:58.021490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021532 | orchestrator | 2025-09-11 00:52:58.021541 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-11 00:52:58.021551 | orchestrator | Thursday 11 September 2025 00:51:54 +0000 (0:00:04.408) 0:04:52.684 **** 2025-09-11 00:52:58.021561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-11 00:52:58.021571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 00:52:58.021585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-11 00:52:58.021638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-11 00:52:58.021648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-09-11 00:52:58.021658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021681 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.021691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-11 00:52:58.021701 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 00:52:58.021716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-11 00:52:58.021767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-09-11 00:52:58.021777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021817 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.021827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-11 00:52:58.021836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 00:52:58.021850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-11 00:52:58.021904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-11 00:52:58.021914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 00:52:58.021938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 00:52:58.021948 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.021957 | orchestrator | 2025-09-11 00:52:58.021967 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-11 00:52:58.021977 | orchestrator | Thursday 11 September 2025 00:51:55 +0000 (0:00:01.189) 0:04:53.873 **** 2025-09-11 00:52:58.021987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-11 00:52:58.021997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-11 00:52:58.022012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-11 00:52:58.022062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-11 00:52:58.022073 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.022083 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-11 00:52:58.022113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-11 00:52:58.022124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-11 00:52:58.022134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-11 00:52:58.022144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-11 00:52:58.022154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-11 00:52:58.022164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-11 00:52:58.022174 | orchestrator | skipping: [testbed-node-1] 
2025-09-11 00:52:58.022184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-11 00:52:58.022193 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.022203 | orchestrator | 2025-09-11 00:52:58.022217 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-11 00:52:58.022227 | orchestrator | Thursday 11 September 2025 00:51:56 +0000 (0:00:00.966) 0:04:54.840 **** 2025-09-11 00:52:58.022237 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.022246 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.022256 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.022265 | orchestrator | 2025-09-11 00:52:58.022275 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-11 00:52:58.022284 | orchestrator | Thursday 11 September 2025 00:51:57 +0000 (0:00:00.445) 0:04:55.286 **** 2025-09-11 00:52:58.022294 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.022312 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.022322 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.022331 | orchestrator | 2025-09-11 00:52:58.022341 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-11 00:52:58.022350 | orchestrator | Thursday 11 September 2025 00:51:58 +0000 (0:00:01.370) 0:04:56.656 **** 2025-09-11 00:52:58.022360 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.022372 | orchestrator | 2025-09-11 00:52:58.022382 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] 
******************* 2025-09-11 00:52:58.022391 | orchestrator | Thursday 11 September 2025 00:52:00 +0000 (0:00:01.608) 0:04:58.265 **** 2025-09-11 00:52:58.022401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:52:58.022418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:52:58.022430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-11 00:52:58.022440 | orchestrator | 2025-09-11 00:52:58.022450 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-11 00:52:58.022459 | orchestrator | Thursday 11 September 2025 00:52:02 +0000 (0:00:02.204) 0:05:00.469 **** 2025-09-11 00:52:58.022473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-11 00:52:58.022490 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.022500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-11 00:52:58.022510 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.022525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-11 00:52:58.022536 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.022545 | orchestrator | 2025-09-11 00:52:58.022555 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-11 00:52:58.022564 | orchestrator | Thursday 11 September 2025 00:52:02 +0000 (0:00:00.322) 0:05:00.792 **** 2025-09-11 00:52:58.022574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-11 00:52:58.022583 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.022593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-11 00:52:58.022602 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.022612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-11 00:52:58.022627 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.022636 | orchestrator | 2025-09-11 00:52:58.022646 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-11 
00:52:58.022655 | orchestrator | Thursday 11 September 2025 00:52:03 +0000 (0:00:00.751) 0:05:01.543 **** 2025-09-11 00:52:58.022665 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.022674 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.022684 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.022693 | orchestrator | 2025-09-11 00:52:58.022703 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-11 00:52:58.022716 | orchestrator | Thursday 11 September 2025 00:52:03 +0000 (0:00:00.378) 0:05:01.921 **** 2025-09-11 00:52:58.022726 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.022736 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.022745 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.022755 | orchestrator | 2025-09-11 00:52:58.022764 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-11 00:52:58.022774 | orchestrator | Thursday 11 September 2025 00:52:04 +0000 (0:00:01.086) 0:05:03.008 **** 2025-09-11 00:52:58.022783 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:52:58.022793 | orchestrator | 2025-09-11 00:52:58.022803 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-11 00:52:58.022812 | orchestrator | Thursday 11 September 2025 00:52:06 +0000 (0:00:01.608) 0:05:04.616 **** 2025-09-11 00:52:58.022822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.022838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.022849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.022868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.022879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.022890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-11 00:52:58.022899 | orchestrator | 2025-09-11 00:52:58.022913 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-11 00:52:58.022923 | orchestrator | Thursday 11 September 2025 00:52:12 +0000 (0:00:05.628) 0:05:10.244 **** 2025-09-11 00:52:58.022934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.022949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.022959 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:52:58.022973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.022984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.022993 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:52:58.023009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.023024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-11 00:52:58.023034 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:52:58.023044 | orchestrator | 2025-09-11 00:52:58.023053 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-11 00:52:58.023063 | orchestrator | Thursday 11 September 2025 00:52:12 +0000 (0:00:00.607) 0:05:10.852 **** 2025-09-11 00:52:58.023077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023133 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.023142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023181 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.023191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-11 00:52:58.023244 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.023254 | orchestrator |
2025-09-11 00:52:58.023263 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-09-11 00:52:58.023273 | orchestrator | Thursday 11 September 2025 00:52:14 +0000 (0:00:01.578) 0:05:12.431 ****
2025-09-11 00:52:58.023283 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:52:58.023292 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:52:58.023302 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:52:58.023311 | orchestrator |
2025-09-11 00:52:58.023321 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-09-11 00:52:58.023330 | orchestrator | Thursday 11 September 2025 00:52:15 +0000 (0:00:01.335) 0:05:13.766 ****
2025-09-11 00:52:58.023340 |
orchestrator | changed: [testbed-node-0]
2025-09-11 00:52:58.023349 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:52:58.023359 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:52:58.023368 | orchestrator |
2025-09-11 00:52:58.023378 | orchestrator | TASK [include_role : swift] ****************************************************
2025-09-11 00:52:58.023387 | orchestrator | Thursday 11 September 2025 00:52:17 +0000 (0:00:02.054) 0:05:15.821 ****
2025-09-11 00:52:58.023397 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.023406 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.023416 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.023425 | orchestrator |
2025-09-11 00:52:58.023435 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-09-11 00:52:58.023444 | orchestrator | Thursday 11 September 2025 00:52:17 +0000 (0:00:00.312) 0:05:16.133 ****
2025-09-11 00:52:58.023454 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.023463 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.023473 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.023482 | orchestrator |
2025-09-11 00:52:58.023491 | orchestrator | TASK [include_role : trove] ****************************************************
2025-09-11 00:52:58.023501 | orchestrator | Thursday 11 September 2025 00:52:18 +0000 (0:00:00.309) 0:05:16.443 ****
2025-09-11 00:52:58.023511 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.023520 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.023529 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.023539 | orchestrator |
2025-09-11 00:52:58.023553 | orchestrator | TASK [include_role : venus] ****************************************************
2025-09-11 00:52:58.023563 | orchestrator | Thursday 11 September 2025 00:52:18 +0000 (0:00:00.637) 0:05:17.080 ****
2025-09-11 00:52:58.023572 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.023582 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.023591 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.023600 | orchestrator |
2025-09-11 00:52:58.023610 | orchestrator | TASK [include_role : watcher] **************************************************
2025-09-11 00:52:58.023620 | orchestrator | Thursday 11 September 2025 00:52:19 +0000 (0:00:00.317) 0:05:17.397 ****
2025-09-11 00:52:58.023629 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.023639 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.023648 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.023658 | orchestrator |
2025-09-11 00:52:58.023667 | orchestrator | TASK [include_role : zun] ******************************************************
2025-09-11 00:52:58.023677 | orchestrator | Thursday 11 September 2025 00:52:19 +0000 (0:00:00.318) 0:05:17.716 ****
2025-09-11 00:52:58.023686 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.023696 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.023710 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.023719 | orchestrator |
2025-09-11 00:52:58.023729 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-09-11 00:52:58.023739 | orchestrator | Thursday 11 September 2025 00:52:20 +0000 (0:00:00.856) 0:05:18.572 ****
2025-09-11 00:52:58.023748 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:58.023758 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:58.023767 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:58.023777 | orchestrator |
2025-09-11 00:52:58.023786 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-09-11 00:52:58.023796 | orchestrator | Thursday 11 September 2025 00:52:21 +0000 (0:00:00.696) 0:05:19.269 ****
2025-09-11 00:52:58.023805 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:58.023815 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:58.023824 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:58.023834 | orchestrator |
2025-09-11 00:52:58.023843 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-09-11 00:52:58.023853 | orchestrator | Thursday 11 September 2025 00:52:21 +0000 (0:00:00.340) 0:05:19.609 ****
2025-09-11 00:52:58.023863 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:58.023872 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:58.023881 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:58.023891 | orchestrator |
2025-09-11 00:52:58.023900 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-09-11 00:52:58.023910 | orchestrator | Thursday 11 September 2025 00:52:22 +0000 (0:00:00.871) 0:05:20.481 ****
2025-09-11 00:52:58.023919 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:58.023929 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:58.023938 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:58.023947 | orchestrator |
2025-09-11 00:52:58.023957 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-09-11 00:52:58.023966 | orchestrator | Thursday 11 September 2025 00:52:23 +0000 (0:00:01.342) 0:05:21.824 ****
2025-09-11 00:52:58.023976 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:58.023985 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:58.023999 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:58.024009 | orchestrator |
2025-09-11 00:52:58.024019 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-09-11 00:52:58.024029 | orchestrator | Thursday 11 September 2025 00:52:24 +0000 (0:00:00.951) 0:05:22.776 ****
2025-09-11 00:52:58.024038 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:52:58.024048 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:52:58.024057 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:52:58.024067 | orchestrator |
2025-09-11 00:52:58.024076 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-09-11 00:52:58.024086 | orchestrator | Thursday 11 September 2025 00:52:29 +0000 (0:00:04.670) 0:05:27.447 ****
2025-09-11 00:52:58.024137 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:58.024147 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:58.024157 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:58.024166 | orchestrator |
2025-09-11 00:52:58.024176 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-09-11 00:52:58.024186 | orchestrator | Thursday 11 September 2025 00:52:31 +0000 (0:00:02.747) 0:05:30.195 ****
2025-09-11 00:52:58.024195 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:52:58.024205 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:52:58.024214 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:52:58.024224 | orchestrator |
2025-09-11 00:52:58.024233 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-09-11 00:52:58.024243 | orchestrator | Thursday 11 September 2025 00:52:39 +0000 (0:00:07.507) 0:05:37.703 ****
2025-09-11 00:52:58.024252 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:58.024262 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:58.024271 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:58.024281 | orchestrator |
2025-09-11 00:52:58.024290 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-09-11 00:52:58.024305 | orchestrator | Thursday 11 September 2025 00:52:42 +0000 (0:00:03.290) 0:05:40.993 ****
2025-09-11 00:52:58.024315 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:52:58.024325 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:52:58.024334 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:52:58.024343 | orchestrator |
2025-09-11 00:52:58.024353 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-09-11 00:52:58.024362 | orchestrator | Thursday 11 September 2025 00:52:51 +0000 (0:00:09.192) 0:05:50.185 ****
2025-09-11 00:52:58.024372 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.024382 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.024391 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.024400 | orchestrator |
2025-09-11 00:52:58.024410 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-09-11 00:52:58.024419 | orchestrator | Thursday 11 September 2025 00:52:52 +0000 (0:00:00.358) 0:05:50.544 ****
2025-09-11 00:52:58.024429 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.024438 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.024448 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.024457 | orchestrator |
2025-09-11 00:52:58.024467 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-09-11 00:52:58.024480 | orchestrator | Thursday 11 September 2025 00:52:52 +0000 (0:00:00.325) 0:05:50.870 ****
2025-09-11 00:52:58.024490 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.024500 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.024509 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.024518 | orchestrator |
2025-09-11 00:52:58.024528 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-09-11 00:52:58.024537 | orchestrator | Thursday 11 September 2025 00:52:53 +0000 (0:00:00.584) 0:05:51.454 ****
2025-09-11 00:52:58.024547 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.024556 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.024566 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.024575 | orchestrator |
2025-09-11 00:52:58.024585 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-11 00:52:58.024594 | orchestrator | Thursday 11 September 2025 00:52:53 +0000 (0:00:00.318) 0:05:51.772 ****
2025-09-11 00:52:58.024604 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.024612 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.024620 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.024628 | orchestrator |
2025-09-11 00:52:58.024636 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-11 00:52:58.024643 | orchestrator | Thursday 11 September 2025 00:52:53 +0000 (0:00:00.310) 0:05:52.083 ****
2025-09-11 00:52:58.024651 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:52:58.024659 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:52:58.024667 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:52:58.024674 | orchestrator |
2025-09-11 00:52:58.024682 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-09-11 00:52:58.024690 | orchestrator | Thursday 11 September 2025 00:52:54 +0000 (0:00:00.313) 0:05:52.397 ****
2025-09-11 00:52:58.024698 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:58.024706 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:58.024713 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:58.024721 | orchestrator |
2025-09-11 00:52:58.024729 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-11 00:52:58.024737 | orchestrator | Thursday 11 September 2025 00:52:55 +0000 (0:00:01.108) 0:05:53.505 ****
2025-09-11 00:52:58.024744 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:52:58.024752 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:52:58.024760 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:52:58.024767 | orchestrator |
2025-09-11 00:52:58.024775 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:52:58.024787 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-11 00:52:58.024796 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-11 00:52:58.024804 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-11 00:52:58.024812 | orchestrator |
2025-09-11 00:52:58.024820 | orchestrator |
2025-09-11 00:52:58.024831 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:52:58.024840 | orchestrator | Thursday 11 September 2025 00:52:56 +0000 (0:00:00.836) 0:05:54.342 ****
2025-09-11 00:52:58.024847 | orchestrator | ===============================================================================
2025-09-11 00:52:58.024855 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.19s
2025-09-11 00:52:58.024863 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 7.51s
2025-09-11 00:52:58.024871 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.13s
2025-09-11 00:52:58.024879 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.63s
2025-09-11 00:52:58.024886 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.30s
2025-09-11 00:52:58.024894 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.92s
2025-09-11 00:52:58.024902 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.67s
2025-09-11 00:52:58.024909 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.45s
2025-09-11 00:52:58.024917 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.41s
2025-09-11 00:52:58.024925 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.36s
2025-09-11 00:52:58.024933 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.14s
2025-09-11 00:52:58.024941 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.10s
2025-09-11 00:52:58.024948 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.08s
2025-09-11 00:52:58.024956 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.98s
2025-09-11 00:52:58.024964 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.97s
2025-09-11 00:52:58.024971 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.88s
2025-09-11 00:52:58.024979 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.82s
2025-09-11 00:52:58.024987 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.72s
2025-09-11 00:52:58.024994 | orchestrator | loadbalancer : Copying over haproxy start script ------------------------ 3.58s
2025-09-11 00:52:58.025002 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.52s
2025-09-11 00:52:58.025010 | orchestrator | 2025-09-11 00:52:57 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:52:58.025022 | orchestrator | 2025-09-11 00:52:57 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:52:58.025030 | orchestrator | 2025-09-11 00:52:57 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:52:58.025038 | orchestrator | 2025-09-11
00:52:57 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:01.032730 | orchestrator | 2025-09-11 00:53:01 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:01.032820 | orchestrator | 2025-09-11 00:53:01 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:01.032835 | orchestrator | 2025-09-11 00:53:01 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:01.032874 | orchestrator | 2025-09-11 00:53:01 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:04.058665 | orchestrator | 2025-09-11 00:53:04 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:04.059123 | orchestrator | 2025-09-11 00:53:04 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:04.060496 | orchestrator | 2025-09-11 00:53:04 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:04.060541 | orchestrator | 2025-09-11 00:53:04 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:07.094447 | orchestrator | 2025-09-11 00:53:07 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:07.094900 | orchestrator | 2025-09-11 00:53:07 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:07.095545 | orchestrator | 2025-09-11 00:53:07 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:07.095702 | orchestrator | 2025-09-11 00:53:07 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:10.146191 | orchestrator | 2025-09-11 00:53:10 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:10.146335 | orchestrator | 2025-09-11 00:53:10 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:10.146383 | orchestrator | 2025-09-11 00:53:10 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:10.146396 | orchestrator | 2025-09-11 00:53:10 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:13.152983 | orchestrator | 2025-09-11 00:53:13 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:13.155156 | orchestrator | 2025-09-11 00:53:13 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:13.156162 | orchestrator | 2025-09-11 00:53:13 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:13.156187 | orchestrator | 2025-09-11 00:53:13 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:16.184693 | orchestrator | 2025-09-11 00:53:16 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:16.185397 | orchestrator | 2025-09-11 00:53:16 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:16.185745 | orchestrator | 2025-09-11 00:53:16 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:16.185758 | orchestrator | 2025-09-11 00:53:16 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:19.219281 | orchestrator | 2025-09-11 00:53:19 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:19.219360 | orchestrator | 2025-09-11 00:53:19 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:19.219965 | orchestrator | 2025-09-11 00:53:19 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:19.219985 | orchestrator | 2025-09-11 00:53:19 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:22.248135 | orchestrator | 2025-09-11 00:53:22 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:22.249030 | orchestrator | 2025-09-11 00:53:22 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:22.250976 | orchestrator | 2025-09-11 00:53:22 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:22.250992 | orchestrator | 2025-09-11 00:53:22 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:25.282530 | orchestrator | 2025-09-11 00:53:25 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:25.283260 | orchestrator | 2025-09-11 00:53:25 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:25.284718 | orchestrator | 2025-09-11 00:53:25 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:25.285146 | orchestrator | 2025-09-11 00:53:25 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:28.327389 | orchestrator | 2025-09-11 00:53:28 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:28.329144 | orchestrator | 2025-09-11 00:53:28 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:28.330951 | orchestrator | 2025-09-11 00:53:28 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:28.330976 | orchestrator | 2025-09-11 00:53:28 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:31.367016 | orchestrator | 2025-09-11 00:53:31 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:31.368388 | orchestrator | 2025-09-11 00:53:31 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:31.369910 | orchestrator | 2025-09-11 00:53:31 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:31.370275 | orchestrator | 2025-09-11 00:53:31 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:34.410744 | orchestrator | 2025-09-11 00:53:34 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:34.412182 | orchestrator | 2025-09-11 00:53:34 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:34.413873 | orchestrator | 2025-09-11 00:53:34 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:34.413900 | orchestrator | 2025-09-11 00:53:34 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:37.466599 | orchestrator | 2025-09-11 00:53:37 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:37.468257 | orchestrator | 2025-09-11 00:53:37 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:37.469942 | orchestrator | 2025-09-11 00:53:37 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:37.470126 | orchestrator | 2025-09-11 00:53:37 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:40.507558 | orchestrator | 2025-09-11 00:53:40 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:40.507874 | orchestrator | 2025-09-11 00:53:40 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:40.508856 | orchestrator | 2025-09-11 00:53:40 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:40.509123 | orchestrator | 2025-09-11 00:53:40 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:43.548064 | orchestrator | 2025-09-11 00:53:43 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:43.548497 | orchestrator | 2025-09-11 00:53:43 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:43.551128 | orchestrator | 2025-09-11 00:53:43 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:43.551246 | orchestrator | 2025-09-11 00:53:43 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:46.595607 | orchestrator | 2025-09-11 00:53:46 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:46.598131 | orchestrator | 2025-09-11 00:53:46 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:46.600994 | orchestrator | 2025-09-11 00:53:46 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:46.601005 | orchestrator | 2025-09-11 00:53:46 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:49.644438 | orchestrator | 2025-09-11 00:53:49 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:49.646336 | orchestrator | 2025-09-11 00:53:49 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:49.648676 | orchestrator | 2025-09-11 00:53:49 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:49.648708 | orchestrator | 2025-09-11 00:53:49 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:52.691728 | orchestrator | 2025-09-11 00:53:52 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:52.692499 | orchestrator | 2025-09-11 00:53:52 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:52.694091 | orchestrator | 2025-09-11 00:53:52 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:52.694230 | orchestrator | 2025-09-11 00:53:52 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:55.733552 | orchestrator | 2025-09-11 00:53:55 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:55.734185 | orchestrator | 2025-09-11 00:53:55 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:55.735558 | orchestrator | 2025-09-11 00:53:55 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:55.735582 | orchestrator | 2025-09-11 00:53:55 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:53:58.775370 | orchestrator | 2025-09-11 00:53:58 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:53:58.777797 | orchestrator | 2025-09-11 00:53:58 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:53:58.779699 | orchestrator | 2025-09-11 00:53:58 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:53:58.779728 | orchestrator | 2025-09-11 00:53:58 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:54:01.835874 | orchestrator | 2025-09-11 00:54:01 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:54:01.837982 | orchestrator | 2025-09-11 00:54:01 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:54:01.841262 | orchestrator | 2025-09-11 00:54:01 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:54:01.841355 | orchestrator | 2025-09-11 00:54:01 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:54:04.878446 | orchestrator | 2025-09-11 00:54:04 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:54:04.878793 | orchestrator | 2025-09-11 00:54:04 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:54:04.879746 | orchestrator | 2025-09-11 00:54:04 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:54:04.879770 | orchestrator | 2025-09-11 00:54:04 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:54:07.936709 | orchestrator | 2025-09-11 00:54:07 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:54:07.939936 | orchestrator | 2025-09-11 00:54:07 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:54:07.940217 | orchestrator | 2025-09-11 00:54:07 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:54:07.940243 | orchestrator | 2025-09-11 00:54:07 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:54:10.987275 | orchestrator | 2025-09-11 00:54:10 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:54:10.988601 | orchestrator | 2025-09-11 00:54:10 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:54:10.990201 | orchestrator | 2025-09-11 00:54:10 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:54:10.990726 | orchestrator | 2025-09-11 00:54:10 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:54:14.037257 | orchestrator | 2025-09-11 00:54:14 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:54:14.038631 | orchestrator | 2025-09-11 00:54:14 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:54:14.040801 | orchestrator | 2025-09-11 00:54:14 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:54:14.041080 | orchestrator | 2025-09-11 00:54:14 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:54:17.076137 | orchestrator | 2025-09-11 00:54:17 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:54:17.076701 | orchestrator | 2025-09-11 00:54:17 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:54:17.078117 | orchestrator | 2025-09-11 00:54:17 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:54:17.078154 | orchestrator | 2025-09-11 00:54:17 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:54:20.121353 | orchestrator | 2025-09-11 00:54:20 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED
2025-09-11 00:54:20.123018 | orchestrator | 2025-09-11 00:54:20 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:54:20.124376 | orchestrator | 2025-09-11 00:54:20 |
INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:20.124402 | orchestrator | 2025-09-11 00:54:20 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:23.182307 | orchestrator | 2025-09-11 00:54:23 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:23.183214 | orchestrator | 2025-09-11 00:54:23 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:23.185080 | orchestrator | 2025-09-11 00:54:23 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:23.185155 | orchestrator | 2025-09-11 00:54:23 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:26.224751 | orchestrator | 2025-09-11 00:54:26 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:26.226946 | orchestrator | 2025-09-11 00:54:26 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:26.229918 | orchestrator | 2025-09-11 00:54:26 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:26.229950 | orchestrator | 2025-09-11 00:54:26 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:29.273664 | orchestrator | 2025-09-11 00:54:29 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:29.275459 | orchestrator | 2025-09-11 00:54:29 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:29.277244 | orchestrator | 2025-09-11 00:54:29 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:29.277279 | orchestrator | 2025-09-11 00:54:29 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:32.325054 | orchestrator | 2025-09-11 00:54:32 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:32.325311 | orchestrator | 2025-09-11 00:54:32 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in 
state STARTED 2025-09-11 00:54:32.326330 | orchestrator | 2025-09-11 00:54:32 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:32.326360 | orchestrator | 2025-09-11 00:54:32 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:35.373978 | orchestrator | 2025-09-11 00:54:35 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:35.374515 | orchestrator | 2025-09-11 00:54:35 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:35.376058 | orchestrator | 2025-09-11 00:54:35 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:35.376084 | orchestrator | 2025-09-11 00:54:35 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:38.424040 | orchestrator | 2025-09-11 00:54:38 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:38.425923 | orchestrator | 2025-09-11 00:54:38 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:38.429539 | orchestrator | 2025-09-11 00:54:38 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:38.430003 | orchestrator | 2025-09-11 00:54:38 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:41.471523 | orchestrator | 2025-09-11 00:54:41 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:41.472636 | orchestrator | 2025-09-11 00:54:41 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:41.474287 | orchestrator | 2025-09-11 00:54:41 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:41.474312 | orchestrator | 2025-09-11 00:54:41 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:44.519669 | orchestrator | 2025-09-11 00:54:44 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:44.521072 | orchestrator 
| 2025-09-11 00:54:44 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:44.522888 | orchestrator | 2025-09-11 00:54:44 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:44.522921 | orchestrator | 2025-09-11 00:54:44 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:47.566537 | orchestrator | 2025-09-11 00:54:47 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:47.569690 | orchestrator | 2025-09-11 00:54:47 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:47.570998 | orchestrator | 2025-09-11 00:54:47 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:47.571034 | orchestrator | 2025-09-11 00:54:47 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:50.609086 | orchestrator | 2025-09-11 00:54:50 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:50.611093 | orchestrator | 2025-09-11 00:54:50 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:50.613227 | orchestrator | 2025-09-11 00:54:50 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:50.613501 | orchestrator | 2025-09-11 00:54:50 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:53.655734 | orchestrator | 2025-09-11 00:54:53 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:53.656886 | orchestrator | 2025-09-11 00:54:53 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:53.658559 | orchestrator | 2025-09-11 00:54:53 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:53.658782 | orchestrator | 2025-09-11 00:54:53 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:56.707531 | orchestrator | 2025-09-11 00:54:56 | INFO  | Task 
da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:56.708595 | orchestrator | 2025-09-11 00:54:56 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:56.710317 | orchestrator | 2025-09-11 00:54:56 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:56.710477 | orchestrator | 2025-09-11 00:54:56 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:54:59.752642 | orchestrator | 2025-09-11 00:54:59 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state STARTED 2025-09-11 00:54:59.753198 | orchestrator | 2025-09-11 00:54:59 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:54:59.754850 | orchestrator | 2025-09-11 00:54:59 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:54:59.755089 | orchestrator | 2025-09-11 00:54:59 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:02.808751 | orchestrator | 2025-09-11 00:55:02.808907 | orchestrator | 2025-09-11 00:55:02.808963 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-11 00:55:02.808980 | orchestrator | 2025-09-11 00:55:02.808991 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-11 00:55:02.809003 | orchestrator | Thursday 11 September 2025 00:44:35 +0000 (0:00:00.813) 0:00:00.813 **** 2025-09-11 00:55:02.809015 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.809027 | orchestrator | 2025-09-11 00:55:02.809100 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-11 00:55:02.809261 | orchestrator | Thursday 11 September 2025 00:44:36 +0000 (0:00:01.256) 0:00:02.070 **** 2025-09-11 00:55:02.809278 | orchestrator | ok: 
[testbed-node-4] 2025-09-11 00:55:02.809292 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.809304 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.809317 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.809330 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.809343 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.809356 | orchestrator | 2025-09-11 00:55:02.809368 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-11 00:55:02.809412 | orchestrator | Thursday 11 September 2025 00:44:38 +0000 (0:00:01.631) 0:00:03.701 **** 2025-09-11 00:55:02.809426 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.809440 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.809452 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.809515 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.809528 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.809540 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.809553 | orchestrator | 2025-09-11 00:55:02.809566 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-11 00:55:02.809579 | orchestrator | Thursday 11 September 2025 00:44:39 +0000 (0:00:00.743) 0:00:04.445 **** 2025-09-11 00:55:02.809618 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.809630 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.809671 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.809717 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.809728 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.809738 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.809749 | orchestrator | 2025-09-11 00:55:02.809760 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-11 00:55:02.809771 | orchestrator | Thursday 11 September 2025 00:44:40 +0000 (0:00:00.897) 0:00:05.343 **** 2025-09-11 
00:55:02.809782 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.809792 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.809803 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.809813 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.809824 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.809834 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.809845 | orchestrator | 2025-09-11 00:55:02.809856 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-11 00:55:02.809883 | orchestrator | Thursday 11 September 2025 00:44:40 +0000 (0:00:00.627) 0:00:05.970 **** 2025-09-11 00:55:02.809895 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.809905 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.809916 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.809926 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.809937 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.809948 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.809958 | orchestrator | 2025-09-11 00:55:02.809969 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-11 00:55:02.809980 | orchestrator | Thursday 11 September 2025 00:44:41 +0000 (0:00:00.501) 0:00:06.472 **** 2025-09-11 00:55:02.809991 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.810369 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.810382 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.810393 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.810403 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.810481 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.810493 | orchestrator | 2025-09-11 00:55:02.810504 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-11 00:55:02.810516 | orchestrator | Thursday 11 September 2025 00:44:42 +0000 
(0:00:00.996) 0:00:07.468 **** 2025-09-11 00:55:02.810528 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.810539 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.810550 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.810561 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.810598 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.810609 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.810620 | orchestrator | 2025-09-11 00:55:02.810662 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-11 00:55:02.810675 | orchestrator | Thursday 11 September 2025 00:44:43 +0000 (0:00:00.823) 0:00:08.292 **** 2025-09-11 00:55:02.810685 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.810696 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.810707 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.810718 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.810728 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.810739 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.810750 | orchestrator | 2025-09-11 00:55:02.810761 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-11 00:55:02.810772 | orchestrator | Thursday 11 September 2025 00:44:44 +0000 (0:00:01.196) 0:00:09.488 **** 2025-09-11 00:55:02.810783 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-11 00:55:02.810794 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-11 00:55:02.810804 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-11 00:55:02.810828 | orchestrator | 2025-09-11 00:55:02.810918 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-11 00:55:02.810989 | orchestrator 
| Thursday 11 September 2025 00:44:45 +0000 (0:00:00.791) 0:00:10.279 **** 2025-09-11 00:55:02.811001 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.811013 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.811023 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.811034 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.811074 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.811085 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.811096 | orchestrator | 2025-09-11 00:55:02.811148 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-11 00:55:02.811161 | orchestrator | Thursday 11 September 2025 00:44:46 +0000 (0:00:01.017) 0:00:11.296 **** 2025-09-11 00:55:02.811172 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-11 00:55:02.811183 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-11 00:55:02.811194 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-11 00:55:02.811204 | orchestrator | 2025-09-11 00:55:02.811215 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-11 00:55:02.811225 | orchestrator | Thursday 11 September 2025 00:44:49 +0000 (0:00:03.086) 0:00:14.383 **** 2025-09-11 00:55:02.811236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-11 00:55:02.811247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-11 00:55:02.811258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-11 00:55:02.811268 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.811279 | orchestrator | 2025-09-11 00:55:02.811290 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-11 00:55:02.811300 | orchestrator | Thursday 11 
September 2025 00:44:49 +0000 (0:00:00.762) 0:00:15.145 **** 2025-09-11 00:55:02.811313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.811327 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.811339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.811349 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.811496 | orchestrator | 2025-09-11 00:55:02.811511 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-11 00:55:02.811522 | orchestrator | Thursday 11 September 2025 00:44:51 +0000 (0:00:01.391) 0:00:16.537 **** 2025-09-11 00:55:02.811542 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.811598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.811620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.811631 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.811642 | orchestrator | 2025-09-11 00:55:02.811652 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-11 00:55:02.811663 | orchestrator | Thursday 11 September 2025 00:44:51 +0000 (0:00:00.111) 0:00:16.648 **** 2025-09-11 00:55:02.811685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-11 00:44:46.684713', 'end': '2025-09-11 00:44:46.956090', 'delta': '0:00:00.271377', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.811700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-11 00:44:47.408252', 
'end': '2025-09-11 00:44:47.692846', 'delta': '0:00:00.284594', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.811711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-11 00:44:48.476366', 'end': '2025-09-11 00:44:48.791232', 'delta': '0:00:00.314866', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.811722 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.811733 | orchestrator | 2025-09-11 00:55:02.811744 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-11 00:55:02.811755 | orchestrator | Thursday 11 September 2025 00:44:51 +0000 (0:00:00.362) 0:00:17.011 **** 2025-09-11 00:55:02.811765 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.811776 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.811787 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.811798 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.811808 | orchestrator | ok: [testbed-node-1] 2025-09-11 
00:55:02.811819 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.811830 | orchestrator | 2025-09-11 00:55:02.811840 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-11 00:55:02.811851 | orchestrator | Thursday 11 September 2025 00:44:54 +0000 (0:00:02.669) 0:00:19.680 **** 2025-09-11 00:55:02.811862 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-11 00:55:02.811899 | orchestrator | 2025-09-11 00:55:02.811911 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-11 00:55:02.811922 | orchestrator | Thursday 11 September 2025 00:44:55 +0000 (0:00:00.928) 0:00:20.609 **** 2025-09-11 00:55:02.811933 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.811943 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.811954 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.811965 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.811975 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.811986 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.811997 | orchestrator | 2025-09-11 00:55:02.812007 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-11 00:55:02.812018 | orchestrator | Thursday 11 September 2025 00:44:56 +0000 (0:00:01.009) 0:00:21.619 **** 2025-09-11 00:55:02.812029 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.812039 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.812050 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.812061 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.812071 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.812082 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.812092 | orchestrator | 2025-09-11 00:55:02.812103 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2025-09-11 00:55:02.812143 | orchestrator | Thursday 11 September 2025 00:44:57 +0000 (0:00:01.019) 0:00:22.638 **** 2025-09-11 00:55:02.812154 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.812165 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.812176 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.812186 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.812197 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.812207 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.812218 | orchestrator | 2025-09-11 00:55:02.812229 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-11 00:55:02.812239 | orchestrator | Thursday 11 September 2025 00:44:58 +0000 (0:00:00.901) 0:00:23.540 **** 2025-09-11 00:55:02.812250 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.812261 | orchestrator | 2025-09-11 00:55:02.812271 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-11 00:55:02.812282 | orchestrator | Thursday 11 September 2025 00:44:58 +0000 (0:00:00.122) 0:00:23.662 **** 2025-09-11 00:55:02.812307 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.812318 | orchestrator | 2025-09-11 00:55:02.812329 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-11 00:55:02.812340 | orchestrator | Thursday 11 September 2025 00:44:58 +0000 (0:00:00.235) 0:00:23.898 **** 2025-09-11 00:55:02.812351 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.812361 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.812372 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.812491 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.812517 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.812528 | orchestrator | skipping: 
[testbed-node-2] 2025-09-11 00:55:02.812538 | orchestrator | 2025-09-11 00:55:02.812559 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-11 00:55:02.812570 | orchestrator | Thursday 11 September 2025 00:44:59 +0000 (0:00:00.556) 0:00:24.454 **** 2025-09-11 00:55:02.812581 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.812591 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.812602 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.812612 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.812623 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.812634 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.812644 | orchestrator | 2025-09-11 00:55:02.812655 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-11 00:55:02.812665 | orchestrator | Thursday 11 September 2025 00:45:00 +0000 (0:00:00.807) 0:00:25.261 **** 2025-09-11 00:55:02.812685 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.812696 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.812706 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.812717 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.812727 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.812751 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.812762 | orchestrator | 2025-09-11 00:55:02.812773 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-11 00:55:02.812784 | orchestrator | Thursday 11 September 2025 00:45:00 +0000 (0:00:00.721) 0:00:25.983 **** 2025-09-11 00:55:02.812794 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.812805 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.812816 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.812826 | orchestrator | skipping: 
[testbed-node-1] 2025-09-11 00:55:02.812837 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.812847 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.812858 | orchestrator | 2025-09-11 00:55:02.812869 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-11 00:55:02.812880 | orchestrator | Thursday 11 September 2025 00:45:01 +0000 (0:00:00.743) 0:00:26.727 **** 2025-09-11 00:55:02.812891 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.812902 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.812912 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.812923 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.812934 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.812945 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.812955 | orchestrator | 2025-09-11 00:55:02.812966 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-11 00:55:02.812977 | orchestrator | Thursday 11 September 2025 00:45:01 +0000 (0:00:00.496) 0:00:27.223 **** 2025-09-11 00:55:02.812987 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.812998 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.813008 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.813019 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.813030 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.813040 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.813051 | orchestrator | 2025-09-11 00:55:02.813067 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-11 00:55:02.813078 | orchestrator | Thursday 11 September 2025 00:45:02 +0000 (0:00:00.619) 0:00:27.843 **** 2025-09-11 00:55:02.813089 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.813100 | orchestrator | skipping: 
[testbed-node-4] 2025-09-11 00:55:02.813137 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.813157 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.813176 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.813194 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.813206 | orchestrator | 2025-09-11 00:55:02.813217 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-11 00:55:02.813228 | orchestrator | Thursday 11 September 2025 00:45:03 +0000 (0:00:00.704) 0:00:28.547 **** 2025-09-11 00:55:02.813241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7-osd--block--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7', 'dm-uuid-LVM-FFRmGyjMJjwyBNPX0mtgwKT08Ec5j8nKiYegaTkdjBNxanDroHG5paqF8aLIOfpq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0befa402--ebd4--5a4e--889f--8c71805f12b9-osd--block--0befa402--ebd4--5a4e--889f--8c71805f12b9', 'dm-uuid-LVM-O83ZxIZ2HKgtn3sHkfxrsbtDwkQwlfAEliHJePbC1pFTz2a2NqegeiNoBPqKriB7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part1', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part14', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part15', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part16', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813418 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7-osd--block--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9brBhl-9rD0-lF3D-tvrO-c62Q-KEhm-mK5VDE', 'scsi-0QEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1', 'scsi-SQEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0befa402--ebd4--5a4e--889f--8c71805f12b9-osd--block--0befa402--ebd4--5a4e--889f--8c71805f12b9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VoFX0M-o1D9-F2dj-bYbg-UG6C-HSx3-9gasFd', 'scsi-0QEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d', 'scsi-SQEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1', 'scsi-SQEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--344fe78f--9b90--543d--a55e--ac4ca1a09e29-osd--block--344fe78f--9b90--543d--a55e--ac4ca1a09e29', 'dm-uuid-LVM-IcFdCfbs0J7lgVCaqy5mU1XDVu6CMknWjfIYlTGUrKo82NMO30nTpFLtBtTp4JTM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2-osd--block--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2', 
'dm-uuid-LVM-MSkNI7CPgw2rIqMkS0ULAlD4N133FmUTgRMv5M7TWonhKc6ByYfwxwZuP8Jgb8yR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813564 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.813582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--344fe78f--9b90--543d--a55e--ac4ca1a09e29-osd--block--344fe78f--9b90--543d--a55e--ac4ca1a09e29'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B7gVWm-Wojy-a5dV-L0Tu-SnNR-y9ze-ex6u97', 'scsi-0QEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7', 'scsi-SQEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2-osd--block--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Hov3a8-9iDi-As7D-PKbZ-qStP-xNc3-IUTecg', 'scsi-0QEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256', 'scsi-SQEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1fcfbff8--db79--5f3f--a505--ec8e716f38d6-osd--block--1fcfbff8--db79--5f3f--a505--ec8e716f38d6', 'dm-uuid-LVM-BUfVwgnon6sZxiupdPkh7tHhQxfkU9wrcQv6EDIHaeS4TSVQjVRY6qHh53bV7eGO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233', 'scsi-SQEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a3e2512--7b8b--5f78--845d--17a09314c972-osd--block--8a3e2512--7b8b--5f78--845d--17a09314c972', 'dm-uuid-LVM-EVgkE7S10cRafvZbRO9DwQh3tt2BT98I9ULTHcbZnJGBSXwTw3BzEjDQvaVxVDdW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813763 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.813774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813878 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part1', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part14', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part15', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part16', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813941 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1fcfbff8--db79--5f3f--a505--ec8e716f38d6-osd--block--1fcfbff8--db79--5f3f--a505--ec8e716f38d6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-A6YIqM-Tf5f-HngE-xa6U-QuYg-PMeg-ro81Ui', 'scsi-0QEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3', 'scsi-SQEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.813987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.813998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8a3e2512--7b8b--5f78--845d--17a09314c972-osd--block--8a3e2512--7b8b--5f78--845d--17a09314c972'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1OzxHd-LuzN-1Wex-64m7-1qYQ-8vcr-kBg1JM', 'scsi-0QEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a', 'scsi-SQEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.814076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a', 'scsi-SQEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.814137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470', 'scsi-SQEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part1', 'scsi-SQEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part14', 'scsi-SQEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part15', 'scsi-SQEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part16', 'scsi-SQEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.814160 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.814180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.814192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814271 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd', 'scsi-SQEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.814325 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.814342 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.814360 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.814378 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.814396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-09-11 00:55:02.814552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:55:02.814570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c', 'scsi-SQEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.814601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:55:02.814620 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.814637 | orchestrator | 2025-09-11 00:55:02.814655 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-11 00:55:02.814675 | orchestrator | Thursday 11 September 2025 00:45:04 +0000 (0:00:01.659) 0:00:30.207 **** 2025-09-11 00:55:02.814694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7-osd--block--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7', 
'dm-uuid-LVM-FFRmGyjMJjwyBNPX0mtgwKT08Ec5j8nKiYegaTkdjBNxanDroHG5paqF8aLIOfpq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0befa402--ebd4--5a4e--889f--8c71805f12b9-osd--block--0befa402--ebd4--5a4e--889f--8c71805f12b9', 'dm-uuid-LVM-O83ZxIZ2HKgtn3sHkfxrsbtDwkQwlfAEliHJePbC1pFTz2a2NqegeiNoBPqKriB7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814793 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814805 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814867 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814887 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--344fe78f--9b90--543d--a55e--ac4ca1a09e29-osd--block--344fe78f--9b90--543d--a55e--ac4ca1a09e29', 'dm-uuid-LVM-IcFdCfbs0J7lgVCaqy5mU1XDVu6CMknWjfIYlTGUrKo82NMO30nTpFLtBtTp4JTM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814898 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814910 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2-osd--block--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2', 'dm-uuid-LVM-MSkNI7CPgw2rIqMkS0ULAlD4N133FmUTgRMv5M7TWonhKc6ByYfwxwZuP8Jgb8yR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814927 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814939 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1fcfbff8--db79--5f3f--a505--ec8e716f38d6-osd--block--1fcfbff8--db79--5f3f--a505--ec8e716f38d6', 'dm-uuid-LVM-BUfVwgnon6sZxiupdPkh7tHhQxfkU9wrcQv6EDIHaeS4TSVQjVRY6qHh53bV7eGO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814957 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814973 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.814985 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a3e2512--7b8b--5f78--845d--17a09314c972-osd--block--8a3e2512--7b8b--5f78--845d--17a09314c972', 'dm-uuid-LVM-EVgkE7S10cRafvZbRO9DwQh3tt2BT98I9ULTHcbZnJGBSXwTw3BzEjDQvaVxVDdW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.815005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part1', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part14', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part15', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part16', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-11 00:55:02.815024 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815040 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815052 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815064 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7-osd--block--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9brBhl-9rD0-lF3D-tvrO-c62Q-KEhm-mK5VDE', 'scsi-0QEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1', 'scsi-SQEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815082 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815171 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815189 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0befa402--ebd4--5a4e--889f--8c71805f12b9-osd--block--0befa402--ebd4--5a4e--889f--8c71805f12b9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VoFX0M-o1D9-F2dj-bYbg-UG6C-HSx3-9gasFd', 'scsi-0QEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d', 'scsi-SQEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815213 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815232 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815251 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1', 'scsi-SQEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815262 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815278 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815298 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815318 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--344fe78f--9b90--543d--a55e--ac4ca1a09e29-osd--block--344fe78f--9b90--543d--a55e--ac4ca1a09e29'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B7gVWm-Wojy-a5dV-L0Tu-SnNR-y9ze-ex6u97', 'scsi-0QEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7', 'scsi-SQEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815335 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815346 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2-osd--block--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Hov3a8-9iDi-As7D-PKbZ-qStP-xNc3-IUTecg', 'scsi-0QEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256', 'scsi-SQEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815358 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815386 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233', 'scsi-SQEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815398 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815409 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815425 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815436 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815473 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815492 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part1', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part14', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part15', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part16', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815504 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815522 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1fcfbff8--db79--5f3f--a505--ec8e716f38d6-osd--block--1fcfbff8--db79--5f3f--a505--ec8e716f38d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-A6YIqM-Tf5f-HngE-xa6U-QuYg-PMeg-ro81Ui', 'scsi-0QEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3', 'scsi-SQEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815544 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815575 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8a3e2512--7b8b--5f78--845d--17a09314c972-osd--block--8a3e2512--7b8b--5f78--845d--17a09314c972'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1OzxHd-LuzN-1Wex-64m7-1qYQ-8vcr-kBg1JM', 'scsi-0QEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a', 'scsi-SQEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815596 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815615 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815660 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470', 'scsi-SQEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part1', 'scsi-SQEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part14', 'scsi-SQEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part15', 'scsi-SQEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part16', 'scsi-SQEMU_QEMU_HARDDISK_c6b39b37-1573-4813-a204-b3511a0e9470-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815689 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a', 'scsi-SQEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815709 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815730 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815769 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.815788 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815801 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815812 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815829 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815841 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815852 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815876 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815888 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815906 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd', 'scsi-SQEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb39152d-f0f1-4dbf-b4d6-619450119bfd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815925 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-11 00:55:02.815936 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.815947 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.815958 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.815969 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.815987 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.815999 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.816011 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.816037 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.816057 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.816087 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.816156 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.816172 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.816201 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c', 'scsi-SQEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb43bbb0-6966-49ea-aa1a-91c534974a2c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-11 00:55:02.816240 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:55:02.816259 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.816275 | orchestrator | 2025-09-11 00:55:02.816287 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-11 00:55:02.816298 | orchestrator | Thursday 11 September 2025 00:45:05 +0000 (0:00:00.875) 0:00:31.082 **** 2025-09-11 00:55:02.816496 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.816517 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.816528 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.816539 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.816549 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.816560 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.816571 | orchestrator | 2025-09-11 00:55:02.816582 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-11 00:55:02.816593 | orchestrator | Thursday 11 September 2025 00:45:06 +0000 (0:00:01.142) 0:00:32.225 **** 2025-09-11 00:55:02.816604 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.816614 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.816625 | 
orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.816636 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.816646 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.816657 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.816668 | orchestrator | 2025-09-11 00:55:02.816679 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-11 00:55:02.816690 | orchestrator | Thursday 11 September 2025 00:45:08 +0000 (0:00:01.036) 0:00:33.262 **** 2025-09-11 00:55:02.816700 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.816711 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.816722 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.816733 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.816743 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.816754 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.816765 | orchestrator | 2025-09-11 00:55:02.816776 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-11 00:55:02.816786 | orchestrator | Thursday 11 September 2025 00:45:09 +0000 (0:00:01.247) 0:00:34.509 **** 2025-09-11 00:55:02.816797 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.816808 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.816819 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.816829 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.816840 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.816850 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.816861 | orchestrator | 2025-09-11 00:55:02.816872 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-11 00:55:02.816882 | orchestrator | Thursday 11 September 2025 00:45:09 +0000 (0:00:00.582) 0:00:35.092 **** 2025-09-11 00:55:02.816902 | orchestrator | skipping: 
[testbed-node-3] 2025-09-11 00:55:02.816913 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.816924 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.816935 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.816976 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.816988 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.816999 | orchestrator | 2025-09-11 00:55:02.817010 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-11 00:55:02.817020 | orchestrator | Thursday 11 September 2025 00:45:10 +0000 (0:00:01.042) 0:00:36.134 **** 2025-09-11 00:55:02.817031 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.817042 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.817059 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.817070 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.817081 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.817092 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.817102 | orchestrator | 2025-09-11 00:55:02.817143 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-11 00:55:02.817157 | orchestrator | Thursday 11 September 2025 00:45:11 +0000 (0:00:00.634) 0:00:36.769 **** 2025-09-11 00:55:02.817168 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-11 00:55:02.817180 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-11 00:55:02.817190 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-11 00:55:02.817201 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-11 00:55:02.817212 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-11 00:55:02.817222 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-11 00:55:02.817233 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 
2025-09-11 00:55:02.817243 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-11 00:55:02.817254 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-11 00:55:02.817265 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-11 00:55:02.817275 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-11 00:55:02.817286 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-11 00:55:02.817297 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-11 00:55:02.817307 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-11 00:55:02.817318 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-11 00:55:02.817328 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-11 00:55:02.817339 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-11 00:55:02.817350 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-11 00:55:02.817360 | orchestrator | 2025-09-11 00:55:02.817371 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-11 00:55:02.817382 | orchestrator | Thursday 11 September 2025 00:45:14 +0000 (0:00:02.652) 0:00:39.422 **** 2025-09-11 00:55:02.817393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-11 00:55:02.817404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-11 00:55:02.817414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-11 00:55:02.817425 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.817436 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-11 00:55:02.817447 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-11 00:55:02.817457 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-11 00:55:02.817468 | orchestrator | skipping: [testbed-node-4] 
2025-09-11 00:55:02.817479 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-11 00:55:02.817489 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-11 00:55:02.817508 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-11 00:55:02.817532 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.817543 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-11 00:55:02.817554 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-11 00:55:02.817565 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-11 00:55:02.817575 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.817586 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-11 00:55:02.817597 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-11 00:55:02.817608 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-11 00:55:02.817619 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.817629 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-11 00:55:02.817640 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-11 00:55:02.817651 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-11 00:55:02.817661 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.817672 | orchestrator | 2025-09-11 00:55:02.817683 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-11 00:55:02.817694 | orchestrator | Thursday 11 September 2025 00:45:15 +0000 (0:00:01.052) 0:00:40.474 **** 2025-09-11 00:55:02.817705 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.817716 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.817726 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.817738 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:55:02.817749 | orchestrator | 2025-09-11 00:55:02.817760 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-11 00:55:02.817771 | orchestrator | Thursday 11 September 2025 00:45:16 +0000 (0:00:01.283) 0:00:41.758 **** 2025-09-11 00:55:02.817782 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.817793 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.817804 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.817816 | orchestrator | 2025-09-11 00:55:02.817834 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-11 00:55:02.817852 | orchestrator | Thursday 11 September 2025 00:45:17 +0000 (0:00:00.757) 0:00:42.515 **** 2025-09-11 00:55:02.817869 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.817888 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.817904 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.817921 | orchestrator | 2025-09-11 00:55:02.817938 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-11 00:55:02.817955 | orchestrator | Thursday 11 September 2025 00:45:17 +0000 (0:00:00.518) 0:00:43.034 **** 2025-09-11 00:55:02.817970 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.817992 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.818008 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.818065 | orchestrator | 2025-09-11 00:55:02.818083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-11 00:55:02.818099 | orchestrator | Thursday 11 September 2025 00:45:18 +0000 (0:00:00.417) 0:00:43.451 **** 2025-09-11 00:55:02.818139 | orchestrator | 
ok: [testbed-node-3] 2025-09-11 00:55:02.818157 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.818174 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.818193 | orchestrator | 2025-09-11 00:55:02.818210 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-11 00:55:02.818227 | orchestrator | Thursday 11 September 2025 00:45:19 +0000 (0:00:01.075) 0:00:44.527 **** 2025-09-11 00:55:02.818246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:55:02.818265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-11 00:55:02.818284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-11 00:55:02.818315 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.818335 | orchestrator | 2025-09-11 00:55:02.818354 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-11 00:55:02.818373 | orchestrator | Thursday 11 September 2025 00:45:19 +0000 (0:00:00.712) 0:00:45.240 **** 2025-09-11 00:55:02.818391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:55:02.818410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-11 00:55:02.818427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-11 00:55:02.818443 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.818459 | orchestrator | 2025-09-11 00:55:02.818475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-11 00:55:02.818491 | orchestrator | Thursday 11 September 2025 00:45:20 +0000 (0:00:00.364) 0:00:45.604 **** 2025-09-11 00:55:02.818508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:55:02.818524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-11 00:55:02.818542 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2025-09-11 00:55:02.818559 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.818576 | orchestrator | 2025-09-11 00:55:02.818593 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-11 00:55:02.818612 | orchestrator | Thursday 11 September 2025 00:45:20 +0000 (0:00:00.291) 0:00:45.896 **** 2025-09-11 00:55:02.818629 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.818647 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.818666 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.818685 | orchestrator | 2025-09-11 00:55:02.818702 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-11 00:55:02.818722 | orchestrator | Thursday 11 September 2025 00:45:20 +0000 (0:00:00.314) 0:00:46.210 **** 2025-09-11 00:55:02.818742 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-11 00:55:02.818760 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-11 00:55:02.818778 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-11 00:55:02.818795 | orchestrator | 2025-09-11 00:55:02.818831 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-11 00:55:02.818852 | orchestrator | Thursday 11 September 2025 00:45:21 +0000 (0:00:00.817) 0:00:47.028 **** 2025-09-11 00:55:02.818870 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-11 00:55:02.818888 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-11 00:55:02.818905 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-11 00:55:02.818922 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-11 00:55:02.818940 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-11 00:55:02.818958 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-11 00:55:02.818975 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-11 00:55:02.818992 | orchestrator | 2025-09-11 00:55:02.819010 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-11 00:55:02.819029 | orchestrator | Thursday 11 September 2025 00:45:22 +0000 (0:00:01.159) 0:00:48.188 **** 2025-09-11 00:55:02.819046 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-11 00:55:02.819065 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-11 00:55:02.819084 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-11 00:55:02.819102 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-11 00:55:02.819196 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-11 00:55:02.819217 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-11 00:55:02.819251 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-11 00:55:02.819269 | orchestrator | 2025-09-11 00:55:02.819287 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-11 00:55:02.819305 | orchestrator | Thursday 11 September 2025 00:45:24 +0000 (0:00:01.493) 0:00:49.682 **** 2025-09-11 00:55:02.819322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.819342 | orchestrator | 2025-09-11 00:55:02.819359 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2025-09-11 00:55:02.819376 | orchestrator | Thursday 11 September 2025 00:45:25 +0000 (0:00:01.168) 0:00:50.851 **** 2025-09-11 00:55:02.819403 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.819422 | orchestrator | 2025-09-11 00:55:02.819440 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-11 00:55:02.819458 | orchestrator | Thursday 11 September 2025 00:45:27 +0000 (0:00:01.851) 0:00:52.702 **** 2025-09-11 00:55:02.819477 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.819497 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.819515 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.819532 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.819550 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.819569 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.819586 | orchestrator | 2025-09-11 00:55:02.819604 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-11 00:55:02.819623 | orchestrator | Thursday 11 September 2025 00:45:29 +0000 (0:00:01.731) 0:00:54.433 **** 2025-09-11 00:55:02.819641 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.819657 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.819672 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.819688 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.819703 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.819718 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.819732 | orchestrator | 2025-09-11 00:55:02.819748 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-11 00:55:02.819765 | orchestrator | Thursday 11 September 2025 00:45:30 +0000 
(0:00:01.122) 0:00:55.556 **** 2025-09-11 00:55:02.819780 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.819796 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.819813 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.819830 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.819845 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.819860 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.819877 | orchestrator | 2025-09-11 00:55:02.819893 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-11 00:55:02.819911 | orchestrator | Thursday 11 September 2025 00:45:31 +0000 (0:00:00.830) 0:00:56.387 **** 2025-09-11 00:55:02.819927 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.819943 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.819960 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.819976 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.819990 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.820006 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.820022 | orchestrator | 2025-09-11 00:55:02.820036 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-11 00:55:02.820051 | orchestrator | Thursday 11 September 2025 00:45:32 +0000 (0:00:01.117) 0:00:57.505 **** 2025-09-11 00:55:02.820066 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.820082 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.820097 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.820197 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.820219 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.820234 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.820247 | orchestrator | 2025-09-11 00:55:02.820261 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2025-09-11 00:55:02.820290 | orchestrator | Thursday 11 September 2025 00:45:33 +0000 (0:00:01.084) 0:00:58.589 ****
2025-09-11 00:55:02.820302 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.820313 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.820324 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.820336 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.820348 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.820359 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.820371 | orchestrator |
2025-09-11 00:55:02.820383 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-11 00:55:02.820397 | orchestrator | Thursday 11 September 2025 00:45:34 +0000 (0:00:01.082) 0:00:59.672 ****
2025-09-11 00:55:02.820409 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.820421 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.820435 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.820449 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.820461 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.820474 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.820488 | orchestrator |
2025-09-11 00:55:02.820502 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-11 00:55:02.820515 | orchestrator | Thursday 11 September 2025 00:45:35 +0000 (0:00:00.989) 0:01:00.661 ****
2025-09-11 00:55:02.820528 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.820541 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.820554 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.820566 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.820578 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.820590 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.820603 | orchestrator |
2025-09-11 00:55:02.820615 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-11 00:55:02.820627 | orchestrator | Thursday 11 September 2025 00:45:36 +0000 (0:00:01.425) 0:01:02.086 ****
2025-09-11 00:55:02.820639 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.820652 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.820665 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.820677 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.820690 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.820702 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.820714 | orchestrator |
2025-09-11 00:55:02.820728 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-11 00:55:02.820741 | orchestrator | Thursday 11 September 2025 00:45:37 +0000 (0:00:00.968) 0:01:03.055 ****
2025-09-11 00:55:02.820755 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.820768 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.820780 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.820794 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.820807 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.820821 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.820833 | orchestrator |
2025-09-11 00:55:02.820845 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-11 00:55:02.820859 | orchestrator | Thursday 11 September 2025 00:45:38 +0000 (0:00:00.694) 0:01:03.750 ****
2025-09-11 00:55:02.820882 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.820895 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.820908 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.820922 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.820932 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.820940 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.820948 | orchestrator |
2025-09-11 00:55:02.820956 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-11 00:55:02.820974 | orchestrator | Thursday 11 September 2025 00:45:39 +0000 (0:00:00.680) 0:01:04.430 ****
2025-09-11 00:55:02.820982 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.820989 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.820997 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.821005 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.821013 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.821020 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.821028 | orchestrator |
2025-09-11 00:55:02.821036 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-11 00:55:02.821044 | orchestrator | Thursday 11 September 2025 00:45:39 +0000 (0:00:00.780) 0:01:05.210 ****
2025-09-11 00:55:02.821052 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.821059 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.821067 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.821075 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.821083 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.821091 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.821099 | orchestrator |
2025-09-11 00:55:02.821107 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-11 00:55:02.821142 | orchestrator | Thursday 11 September 2025 00:45:40 +0000 (0:00:00.873) 0:01:06.083 ****
2025-09-11 00:55:02.821150 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.821158 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.821166 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.821174 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.821182 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.821189 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.821197 | orchestrator |
2025-09-11 00:55:02.821205 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-11 00:55:02.821213 | orchestrator | Thursday 11 September 2025 00:45:41 +0000 (0:00:00.743) 0:01:06.827 ****
2025-09-11 00:55:02.821220 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.821228 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.821236 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.821243 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.821251 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.821259 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.821266 | orchestrator |
2025-09-11 00:55:02.821274 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-11 00:55:02.821282 | orchestrator | Thursday 11 September 2025 00:45:42 +0000 (0:00:00.548) 0:01:07.376 ****
2025-09-11 00:55:02.821290 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.821297 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.821305 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.821312 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.821320 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.821328 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.821336 | orchestrator |
2025-09-11 00:55:02.821355 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-11 00:55:02.821363 | orchestrator | Thursday 11 September 2025 00:45:42 +0000 (0:00:00.749) 0:01:08.125 ****
2025-09-11 00:55:02.821371 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.821379 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.821386 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.821394 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.821402 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.821410 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.821418 | orchestrator |
2025-09-11 00:55:02.821425 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-11 00:55:02.821433 | orchestrator | Thursday 11 September 2025 00:45:43 +0000 (0:00:00.624) 0:01:08.750 ****
2025-09-11 00:55:02.821441 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.821449 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.821462 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.821470 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.821477 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.821485 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.821493 | orchestrator |
2025-09-11 00:55:02.821501 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-11 00:55:02.821509 | orchestrator | Thursday 11 September 2025 00:45:44 +0000 (0:00:01.037) 0:01:09.787 ****
2025-09-11 00:55:02.821516 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.821524 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.821532 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.821540 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.821547 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.821555 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.821562 | orchestrator |
2025-09-11 00:55:02.821570 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-09-11 00:55:02.821578 | orchestrator | Thursday 11 September 2025 00:45:45 +0000 (0:00:01.132) 0:01:10.919 ****
2025-09-11 00:55:02.821586 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.821594 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.821601 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.821609 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.821617 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.821624 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.821632 | orchestrator |
2025-09-11 00:55:02.821640 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-09-11 00:55:02.821648 | orchestrator | Thursday 11 September 2025 00:45:47 +0000 (0:00:01.470) 0:01:12.390 ****
2025-09-11 00:55:02.821655 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.821663 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.821671 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.821678 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.821686 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.821694 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.821702 | orchestrator |
2025-09-11 00:55:02.821709 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-09-11 00:55:02.821722 | orchestrator | Thursday 11 September 2025 00:45:49 +0000 (0:00:02.044) 0:01:14.434 ****
2025-09-11 00:55:02.821730 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:55:02.821739 | orchestrator |
2025-09-11 00:55:02.821747 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-09-11 00:55:02.821755 | orchestrator | Thursday 11 September 2025 00:45:50 +0000 (0:00:01.099) 0:01:15.533 ****
2025-09-11 00:55:02.821762 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.821770 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.821778 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.821785 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.821793 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.821801 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.821808 | orchestrator |
2025-09-11 00:55:02.821816 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-09-11 00:55:02.821824 | orchestrator | Thursday 11 September 2025 00:45:50 +0000 (0:00:00.566) 0:01:16.100 ****
2025-09-11 00:55:02.821832 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.821839 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.821847 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.821855 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.821862 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.821870 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.821878 | orchestrator |
2025-09-11 00:55:02.821886 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-09-11 00:55:02.821899 | orchestrator | Thursday 11 September 2025 00:45:51 +0000 (0:00:00.731) 0:01:16.832 ****
2025-09-11 00:55:02.821907 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-11 00:55:02.821915 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-11 00:55:02.821923 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-11 00:55:02.821930 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-11 00:55:02.821938 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-11 00:55:02.821946 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-11 00:55:02.821954 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-11 00:55:02.821962 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-11 00:55:02.821969 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-11 00:55:02.821977 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-11 00:55:02.821985 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-11 00:55:02.821997 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-11 00:55:02.822005 | orchestrator |
2025-09-11 00:55:02.822013 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-09-11 00:55:02.822062 | orchestrator | Thursday 11 September 2025 00:45:52 +0000 (0:00:01.387) 0:01:18.219 ****
2025-09-11 00:55:02.822070 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.822079 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.822087 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.822094 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.822102 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.822128 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.822137 | orchestrator |
2025-09-11 00:55:02.822145 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-09-11 00:55:02.822153 | orchestrator | Thursday 11 September 2025 00:45:54 +0000 (0:00:01.101) 0:01:19.320 ****
2025-09-11 00:55:02.822160 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.822168 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.822176 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.822183 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.822191 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.822199 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.822206 | orchestrator |
2025-09-11 00:55:02.822214 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-09-11 00:55:02.822222 | orchestrator | Thursday 11 September 2025 00:45:54 +0000 (0:00:00.592) 0:01:19.913 ****
2025-09-11 00:55:02.822230 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.822237 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.822245 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.822253 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.822260 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.822268 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.822276 | orchestrator |
2025-09-11 00:55:02.822283 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-09-11 00:55:02.822291 | orchestrator | Thursday 11 September 2025 00:45:55 +0000 (0:00:00.764) 0:01:20.677 ****
2025-09-11 00:55:02.822299 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.822307 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.822314 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.822322 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.822330 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.822337 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.822351 | orchestrator |
2025-09-11 00:55:02.822359 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-09-11 00:55:02.822366 | orchestrator | Thursday 11 September 2025 00:45:55 +0000 (0:00:00.551) 0:01:21.228 ****
2025-09-11 00:55:02.822379 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:55:02.822387 | orchestrator |
2025-09-11 00:55:02.822394 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-09-11 00:55:02.822402 | orchestrator | Thursday 11 September 2025 00:45:57 +0000 (0:00:01.156) 0:01:22.385 ****
2025-09-11 00:55:02.822410 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.822418 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.822426 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.822433 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.822441 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.822449 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.822456 | orchestrator |
2025-09-11 00:55:02.822464 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-09-11 00:55:02.822472 | orchestrator | Thursday 11 September 2025 00:47:03 +0000 (0:01:05.983) 0:02:28.369 ****
2025-09-11 00:55:02.822480 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-11 00:55:02.822487 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-11 00:55:02.822495 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-11 00:55:02.822503 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.822511 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-11 00:55:02.822518 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-11 00:55:02.822526 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-11 00:55:02.822534 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.822542 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-11 00:55:02.822549 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-11 00:55:02.822557 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-11 00:55:02.822565 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.822573 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-11 00:55:02.822580 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-11 00:55:02.822588 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-11 00:55:02.822596 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.822603 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-11 00:55:02.822611 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-11 00:55:02.822619 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-11 00:55:02.822627 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.822635 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-11 00:55:02.822658 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-11 00:55:02.822666 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-11 00:55:02.822674 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.822682 | orchestrator |
2025-09-11 00:55:02.822689 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-09-11 00:55:02.822697 | orchestrator | Thursday 11 September 2025 00:47:03 +0000 (0:00:00.612) 0:02:28.982 ****
2025-09-11 00:55:02.822705 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.822713 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.822726 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.822733 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.822741 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.822749 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.822757 | orchestrator |
2025-09-11 00:55:02.822764 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-09-11 00:55:02.822772 | orchestrator | Thursday 11 September 2025 00:47:04 +0000 (0:00:00.523) 0:02:29.505 ****
2025-09-11 00:55:02.822780 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.822788 | orchestrator |
2025-09-11 00:55:02.822795 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-09-11 00:55:02.822803 | orchestrator | Thursday 11 September 2025 00:47:04 +0000 (0:00:00.270) 0:02:29.776 ****
2025-09-11 00:55:02.822811 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.822818 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.822826 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.822834 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.822841 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.822849 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.822857 | orchestrator |
2025-09-11 00:55:02.822865 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-09-11 00:55:02.822872 | orchestrator | Thursday 11 September 2025 00:47:05 +0000 (0:00:00.550) 0:02:30.326 ****
2025-09-11 00:55:02.822880 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.822888 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.822896 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.822903 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.822911 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.822919 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.822926 | orchestrator |
2025-09-11 00:55:02.822934 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-09-11 00:55:02.822942 | orchestrator | Thursday 11 September 2025 00:47:05 +0000 (0:00:00.533) 0:02:30.932 ****
2025-09-11 00:55:02.822950 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.822958 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.822965 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.822973 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.822980 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.822988 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.822996 | orchestrator |
2025-09-11 00:55:02.823007 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-09-11 00:55:02.823016 | orchestrator | Thursday 11 September 2025 00:47:06 +0000 (0:00:00.533) 0:02:31.465 ****
2025-09-11 00:55:02.823023 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.823031 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.823039 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.823046 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.823054 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.823062 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.823069 | orchestrator |
2025-09-11 00:55:02.823077 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-09-11 00:55:02.823085 | orchestrator | Thursday 11 September 2025 00:47:08 +0000 (0:00:02.624) 0:02:34.089 ****
2025-09-11 00:55:02.823093 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.823100 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.823153 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.823164 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.823172 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.823179 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.823187 | orchestrator |
2025-09-11 00:55:02.823195 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-09-11 00:55:02.823203 | orchestrator | Thursday 11 September 2025 00:47:09 +0000 (0:00:00.697) 0:02:34.787 ****
2025-09-11 00:55:02.823211 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:55:02.823231 | orchestrator |
2025-09-11 00:55:02.823238 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-09-11 00:55:02.823246 | orchestrator | Thursday 11 September 2025 00:47:10 +0000 (0:00:01.085) 0:02:35.873 ****
2025-09-11 00:55:02.823254 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.823262 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.823270 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.823277 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.823285 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.823293 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.823300 | orchestrator |
2025-09-11 00:55:02.823308 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-09-11 00:55:02.823316 | orchestrator | Thursday 11 September 2025 00:47:11 +0000 (0:00:00.453) 0:02:36.326 ****
2025-09-11 00:55:02.823324 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.823331 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.823339 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.823347 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.823354 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.823362 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.823370 | orchestrator |
2025-09-11 00:55:02.823377 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-09-11 00:55:02.823385 | orchestrator | Thursday 11 September 2025 00:47:11 +0000 (0:00:00.897) 0:02:37.224 ****
2025-09-11 00:55:02.823393 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.823401 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.823408 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.823416 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.823424 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.823437 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.823445 | orchestrator |
2025-09-11 00:55:02.823453 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-09-11 00:55:02.823460 | orchestrator | Thursday 11 September 2025 00:47:12 +0000 (0:00:00.595) 0:02:37.819 ****
2025-09-11 00:55:02.823468 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.823476 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.823483 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.823491 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.823499 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.823507 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.823514 | orchestrator |
2025-09-11 00:55:02.823522 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-09-11 00:55:02.823530 | orchestrator | Thursday 11 September 2025 00:47:13 +0000 (0:00:00.857) 0:02:38.677 ****
2025-09-11 00:55:02.823538 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.823545 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.823553 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.823561 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.823568 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.823576 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.823584 | orchestrator |
2025-09-11 00:55:02.823591 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-09-11 00:55:02.823599 | orchestrator | Thursday 11 September 2025 00:47:13 +0000 (0:00:00.536) 0:02:39.213 ****
2025-09-11 00:55:02.823607 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.823615 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.823621 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.823628 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.823634 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.823641 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.823652 | orchestrator |
2025-09-11 00:55:02.823659 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-09-11 00:55:02.823666 | orchestrator | Thursday 11 September 2025 00:47:14 +0000 (0:00:00.976) 0:02:40.190 ****
2025-09-11 00:55:02.823672 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.823679 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.823685 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.823692 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.823698 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.823705 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.823711 | orchestrator |
2025-09-11 00:55:02.823718 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-09-11 00:55:02.823725 | orchestrator | Thursday 11 September 2025 00:47:15 +0000 (0:00:00.753) 0:02:40.943 ****
2025-09-11 00:55:02.823731 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.823738 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.823744 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.823751 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.823757 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.823764 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.823771 | orchestrator |
2025-09-11 00:55:02.823777 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-09-11 00:55:02.823784 | orchestrator | Thursday 11 September 2025 00:47:16 +0000 (0:00:00.751) 0:02:41.695 ****
2025-09-11 00:55:02.823791 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.823798 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.823804 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.823811 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.823817 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.823824 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.823830 | orchestrator |
2025-09-11 00:55:02.823837 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-09-11 00:55:02.823844 | orchestrator | Thursday 11 September 2025 00:47:17 +0000 (0:00:01.145) 0:02:42.840 ****
2025-09-11 00:55:02.823851 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-2, testbed-node-1
2025-09-11 00:55:02.823857 | orchestrator |
2025-09-11 00:55:02.823887 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-09-11 00:55:02.823894 | orchestrator | Thursday 11 September 2025 00:47:18 +0000 (0:00:01.113) 0:02:43.954 ****
2025-09-11 00:55:02.823901 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-09-11 00:55:02.823908 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-09-11 00:55:02.823915 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-09-11 00:55:02.823922 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-09-11 00:55:02.823928 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-09-11 00:55:02.823935 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-09-11 00:55:02.823941 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-09-11 00:55:02.823948 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-09-11 00:55:02.823954 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-09-11 00:55:02.823961 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-09-11 00:55:02.823968 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-09-11 00:55:02.823974 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-09-11 00:55:02.823981 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-09-11 00:55:02.823987 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-09-11 00:55:02.823994 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-09-11 00:55:02.824000 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-09-11 00:55:02.824007 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-09-11 00:55:02.824018 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-09-11 00:55:02.824025 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-09-11 00:55:02.824031 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-09-11 00:55:02.824042 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-09-11 00:55:02.824049 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-09-11 00:55:02.824055 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-09-11 00:55:02.824062 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-11 00:55:02.824068 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-11 00:55:02.824075 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-11 00:55:02.824082 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-11 00:55:02.824088 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-11 00:55:02.824094 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-11 00:55:02.824101 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-11 00:55:02.824107 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-11 00:55:02.824129 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-11 00:55:02.824136 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-11 00:55:02.824142 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-11 00:55:02.824149 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-11 00:55:02.824155 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-11 00:55:02.824162 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-11 00:55:02.824168 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-11 00:55:02.824175 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-11 00:55:02.824181 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-11 00:55:02.824188 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-11 00:55:02.824195 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-11 00:55:02.824201 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-11 
00:55:02.824208 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-11 00:55:02.824214 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-11 00:55:02.824221 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-11 00:55:02.824227 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-11 00:55:02.824234 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-11 00:55:02.824240 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-11 00:55:02.824251 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-11 00:55:02.824257 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-11 00:55:02.824264 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-11 00:55:02.824271 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-11 00:55:02.824277 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-11 00:55:02.824284 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-11 00:55:02.824290 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-11 00:55:02.824297 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-11 00:55:02.824303 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-11 00:55:02.824310 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-11 00:55:02.824322 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-11 00:55:02.824328 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-11 00:55:02.824335 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-mgr) 2025-09-11 00:55:02.824341 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-11 00:55:02.824348 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-11 00:55:02.824354 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-11 00:55:02.824361 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-11 00:55:02.824367 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-11 00:55:02.824374 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-11 00:55:02.824381 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-11 00:55:02.824387 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-11 00:55:02.824394 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-11 00:55:02.824400 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-11 00:55:02.824406 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-11 00:55:02.824413 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-11 00:55:02.824420 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-11 00:55:02.824426 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-11 00:55:02.824433 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-11 00:55:02.824443 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-11 00:55:02.824450 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-11 00:55:02.824456 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 
2025-09-11 00:55:02.824463 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-11 00:55:02.824470 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-11 00:55:02.824476 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-11 00:55:02.824483 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-11 00:55:02.824490 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-11 00:55:02.824496 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-11 00:55:02.824503 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-11 00:55:02.824509 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-11 00:55:02.824516 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-11 00:55:02.824523 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-11 00:55:02.824529 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-11 00:55:02.824536 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-11 00:55:02.824542 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-11 00:55:02.824549 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-11 00:55:02.824556 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-11 00:55:02.824562 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-11 00:55:02.824569 | orchestrator | 2025-09-11 00:55:02.824575 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-11 00:55:02.824582 | orchestrator | Thursday 11 September 2025 00:47:25 +0000 (0:00:07.102) 0:02:51.057 **** 2025-09-11 00:55:02.824588 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.824600 | orchestrator | skipping: 
[testbed-node-1] 2025-09-11 00:55:02.824606 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.824614 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:55:02.824620 | orchestrator | 2025-09-11 00:55:02.824627 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-11 00:55:02.824634 | orchestrator | Thursday 11 September 2025 00:47:26 +0000 (0:00:00.835) 0:02:51.892 **** 2025-09-11 00:55:02.824640 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.824651 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.824657 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.824664 | orchestrator | 2025-09-11 00:55:02.824671 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-11 00:55:02.824677 | orchestrator | Thursday 11 September 2025 00:47:27 +0000 (0:00:00.605) 0:02:52.498 **** 2025-09-11 00:55:02.824684 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.824691 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.824697 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.824704 | orchestrator | 2025-09-11 00:55:02.824710 | orchestrator | TASK [ceph-config : Reset num_osds] 
******************************************** 2025-09-11 00:55:02.824717 | orchestrator | Thursday 11 September 2025 00:47:28 +0000 (0:00:01.095) 0:02:53.594 **** 2025-09-11 00:55:02.824724 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.824730 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.824737 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.824743 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.824750 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.824756 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.824763 | orchestrator | 2025-09-11 00:55:02.824770 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-11 00:55:02.824776 | orchestrator | Thursday 11 September 2025 00:47:29 +0000 (0:00:00.825) 0:02:54.419 **** 2025-09-11 00:55:02.824783 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.824789 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.824796 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.824802 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.824809 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.824815 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.824822 | orchestrator | 2025-09-11 00:55:02.824829 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-11 00:55:02.824835 | orchestrator | Thursday 11 September 2025 00:47:29 +0000 (0:00:00.454) 0:02:54.873 **** 2025-09-11 00:55:02.824842 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.824848 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.824855 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.824862 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.824868 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.824875 | orchestrator | skipping: [testbed-node-2] 2025-09-11 
00:55:02.824881 | orchestrator | 2025-09-11 00:55:02.824888 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-11 00:55:02.824894 | orchestrator | Thursday 11 September 2025 00:47:30 +0000 (0:00:00.675) 0:02:55.548 **** 2025-09-11 00:55:02.824905 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.824912 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.824927 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.824934 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.824941 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.824947 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.824954 | orchestrator | 2025-09-11 00:55:02.824960 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-11 00:55:02.824967 | orchestrator | Thursday 11 September 2025 00:47:30 +0000 (0:00:00.606) 0:02:56.155 **** 2025-09-11 00:55:02.824974 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.824980 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.824987 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.824993 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825000 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825006 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825013 | orchestrator | 2025-09-11 00:55:02.825020 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-11 00:55:02.825026 | orchestrator | Thursday 11 September 2025 00:47:31 +0000 (0:00:00.645) 0:02:56.801 **** 2025-09-11 00:55:02.825033 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.825039 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.825046 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825052 | orchestrator | skipping: 
[testbed-node-0] 2025-09-11 00:55:02.825059 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825065 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825072 | orchestrator | 2025-09-11 00:55:02.825078 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-11 00:55:02.825085 | orchestrator | Thursday 11 September 2025 00:47:32 +0000 (0:00:00.731) 0:02:57.532 **** 2025-09-11 00:55:02.825092 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.825098 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.825105 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825127 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825138 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825149 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825160 | orchestrator | 2025-09-11 00:55:02.825169 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-11 00:55:02.825181 | orchestrator | Thursday 11 September 2025 00:47:33 +0000 (0:00:00.827) 0:02:58.359 **** 2025-09-11 00:55:02.825187 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.825194 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.825200 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825207 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825214 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825220 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825227 | orchestrator | 2025-09-11 00:55:02.825233 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-11 00:55:02.825244 | orchestrator | Thursday 11 September 2025 00:47:33 +0000 (0:00:00.609) 0:02:58.970 **** 2025-09-11 00:55:02.825251 | orchestrator | skipping: 
[testbed-node-0] 2025-09-11 00:55:02.825257 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825264 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825270 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.825277 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.825283 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.825290 | orchestrator | 2025-09-11 00:55:02.825297 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-11 00:55:02.825303 | orchestrator | Thursday 11 September 2025 00:47:36 +0000 (0:00:02.903) 0:03:01.873 **** 2025-09-11 00:55:02.825310 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.825316 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.825323 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825329 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.825341 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825347 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825354 | orchestrator | 2025-09-11 00:55:02.825361 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-11 00:55:02.825367 | orchestrator | Thursday 11 September 2025 00:47:37 +0000 (0:00:00.543) 0:03:02.416 **** 2025-09-11 00:55:02.825374 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.825380 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.825387 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.825393 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825400 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825406 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825413 | orchestrator | 2025-09-11 00:55:02.825419 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-11 00:55:02.825426 | orchestrator | Thursday 11 September 2025 00:47:37 +0000 
(0:00:00.670) 0:03:03.087 **** 2025-09-11 00:55:02.825432 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.825439 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.825445 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825452 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825458 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825465 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825471 | orchestrator | 2025-09-11 00:55:02.825478 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-11 00:55:02.825484 | orchestrator | Thursday 11 September 2025 00:47:38 +0000 (0:00:00.559) 0:03:03.646 **** 2025-09-11 00:55:02.825491 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.825498 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.825504 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.825511 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825518 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825524 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825531 | orchestrator | 2025-09-11 00:55:02.825541 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-11 00:55:02.825548 | orchestrator | Thursday 11 September 2025 00:47:39 +0000 (0:00:00.744) 0:03:04.391 **** 2025-09-11 00:55:02.825557 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-11 00:55:02.825566 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-11 00:55:02.825574 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.825581 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-11 00:55:02.825588 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-11 00:55:02.825599 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-11 00:55:02.825610 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-11 00:55:02.825617 | orchestrator | skipping: 
[testbed-node-0] 2025-09-11 00:55:02.825624 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.825631 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825637 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825644 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825650 | orchestrator | 2025-09-11 00:55:02.825657 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-11 00:55:02.825663 | orchestrator | Thursday 11 September 2025 00:47:39 +0000 (0:00:00.660) 0:03:05.052 **** 2025-09-11 00:55:02.825670 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.825676 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.825683 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825690 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825696 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825703 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825709 | orchestrator | 2025-09-11 00:55:02.825716 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-11 00:55:02.825722 | orchestrator | Thursday 11 September 2025 00:47:40 +0000 (0:00:00.641) 0:03:05.693 **** 2025-09-11 00:55:02.825729 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.825735 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.825742 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825748 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825755 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825761 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825768 | orchestrator | 2025-09-11 00:55:02.825775 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-11 00:55:02.825781 | orchestrator | 
Thursday 11 September 2025 00:47:40 +0000 (0:00:00.537) 0:03:06.230 **** 2025-09-11 00:55:02.825788 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.825794 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.825801 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825807 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825813 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825820 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825826 | orchestrator | 2025-09-11 00:55:02.825833 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-11 00:55:02.825840 | orchestrator | Thursday 11 September 2025 00:47:41 +0000 (0:00:00.952) 0:03:07.183 **** 2025-09-11 00:55:02.825846 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.825853 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.825859 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825865 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825872 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825878 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825885 | orchestrator | 2025-09-11 00:55:02.825891 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-11 00:55:02.825898 | orchestrator | Thursday 11 September 2025 00:47:42 +0000 (0:00:00.537) 0:03:07.720 **** 2025-09-11 00:55:02.825905 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.825915 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.825921 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.825932 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825939 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.825946 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.825952 | orchestrator | 2025-09-11 00:55:02.825959 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-11 00:55:02.825965 | orchestrator | Thursday 11 September 2025 00:47:43 +0000 (0:00:00.646) 0:03:08.367 **** 2025-09-11 00:55:02.825972 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.825978 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.825985 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.825992 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.825998 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.826005 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.826011 | orchestrator | 2025-09-11 00:55:02.826107 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-11 00:55:02.826156 | orchestrator | Thursday 11 September 2025 00:47:43 +0000 (0:00:00.781) 0:03:09.148 **** 2025-09-11 00:55:02.826163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:55:02.826170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-11 00:55:02.826176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-11 00:55:02.826183 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.826190 | orchestrator | 2025-09-11 00:55:02.826196 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-11 00:55:02.826202 | orchestrator | Thursday 11 September 2025 00:47:44 +0000 (0:00:00.388) 0:03:09.537 **** 2025-09-11 00:55:02.826208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:55:02.826214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-11 00:55:02.826220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-11 00:55:02.826226 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.826232 | orchestrator | 2025-09-11 00:55:02.826238 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-11 00:55:02.826244 | orchestrator | Thursday 11 September 2025 00:47:44 +0000 (0:00:00.556) 0:03:10.093 ****
2025-09-11 00:55:02.826250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-11 00:55:02.826256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-11 00:55:02.826262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-11 00:55:02.826268 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.826274 | orchestrator |
2025-09-11 00:55:02.826280 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-11 00:55:02.826287 | orchestrator | Thursday 11 September 2025 00:47:45 +0000 (0:00:00.627) 0:03:10.721 ****
2025-09-11 00:55:02.826293 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.826299 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.826309 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.826315 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.826322 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.826328 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.826334 | orchestrator |
2025-09-11 00:55:02.826340 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-11 00:55:02.826346 | orchestrator | Thursday 11 September 2025 00:47:46 +0000 (0:00:00.990) 0:03:11.712 ****
2025-09-11 00:55:02.826352 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-11 00:55:02.826358 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-11 00:55:02.826364 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-11 00:55:02.826370 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-09-11 00:55:02.826376 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-09-11 00:55:02.826383 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.826389 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.826395 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-09-11 00:55:02.826409 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.826415 | orchestrator |
2025-09-11 00:55:02.826421 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-09-11 00:55:02.826427 | orchestrator | Thursday 11 September 2025 00:47:48 +0000 (0:00:01.557) 0:03:13.270 ****
2025-09-11 00:55:02.826433 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.826439 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.826445 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.826452 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.826457 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.826464 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.826470 | orchestrator |
2025-09-11 00:55:02.826476 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-11 00:55:02.826482 | orchestrator | Thursday 11 September 2025 00:47:50 +0000 (0:00:02.176) 0:03:15.446 ****
2025-09-11 00:55:02.826488 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.826494 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.826500 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.826506 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.826512 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.826518 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.826524 | orchestrator |
2025-09-11 00:55:02.826530 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-11 00:55:02.826536 | orchestrator | Thursday 11 September 2025 00:47:51 +0000 (0:00:01.297) 0:03:16.744 ****
2025-09-11 00:55:02.826542 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.826548 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.826554 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.826560 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:55:02.826566 | orchestrator |
2025-09-11 00:55:02.826573 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-11 00:55:02.826579 | orchestrator | Thursday 11 September 2025 00:47:52 +0000 (0:00:00.882) 0:03:17.626 ****
2025-09-11 00:55:02.826585 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.826591 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.826597 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.826603 | orchestrator |
2025-09-11 00:55:02.826634 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-11 00:55:02.826641 | orchestrator | Thursday 11 September 2025 00:47:52 +0000 (0:00:00.274) 0:03:17.901 ****
2025-09-11 00:55:02.826648 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.826654 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.826660 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.826666 | orchestrator |
2025-09-11 00:55:02.826672 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-11 00:55:02.826678 | orchestrator | Thursday 11 September 2025 00:47:54 +0000 (0:00:01.383) 0:03:19.285 ****
2025-09-11 00:55:02.826684 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-11 00:55:02.826690 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-11 00:55:02.826696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-11 00:55:02.826702 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.826708 | orchestrator |
2025-09-11 00:55:02.826714 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-11 00:55:02.826721 | orchestrator | Thursday 11 September 2025 00:47:54 +0000 (0:00:00.719) 0:03:20.005 ****
2025-09-11 00:55:02.826727 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.826733 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.826739 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.826745 | orchestrator |
2025-09-11 00:55:02.826751 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-11 00:55:02.826757 | orchestrator | Thursday 11 September 2025 00:47:55 +0000 (0:00:00.395) 0:03:20.401 ****
2025-09-11 00:55:02.826768 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.826774 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.826780 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.826786 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.826792 | orchestrator |
2025-09-11 00:55:02.826798 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-11 00:55:02.826804 | orchestrator | Thursday 11 September 2025 00:47:56 +0000 (0:00:01.050) 0:03:21.452 ****
2025-09-11 00:55:02.826810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-11 00:55:02.826816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-11 00:55:02.826822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-11 00:55:02.826828 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.826834 | orchestrator |
2025-09-11 00:55:02.826840 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-11 00:55:02.826846 | orchestrator | Thursday 11 September 2025 00:47:56 +0000 (0:00:00.359) 0:03:21.811 ****
2025-09-11 00:55:02.826852 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.826859 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.826868 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.826874 | orchestrator |
2025-09-11 00:55:02.826880 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-11 00:55:02.826887 | orchestrator | Thursday 11 September 2025 00:47:57 +0000 (0:00:00.440) 0:03:22.252 ****
2025-09-11 00:55:02.826893 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.826899 | orchestrator |
2025-09-11 00:55:02.826905 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-11 00:55:02.826911 | orchestrator | Thursday 11 September 2025 00:47:58 +0000 (0:00:01.003) 0:03:23.255 ****
2025-09-11 00:55:02.826917 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.826923 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.826929 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.826935 | orchestrator |
2025-09-11 00:55:02.826941 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-11 00:55:02.826947 | orchestrator | Thursday 11 September 2025 00:47:58 +0000 (0:00:00.391) 0:03:23.647 ****
2025-09-11 00:55:02.826953 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.826959 | orchestrator |
2025-09-11 00:55:02.826965 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-11 00:55:02.826972 | orchestrator | Thursday 11 September 2025 00:47:58 +0000 (0:00:00.300) 0:03:23.947 ****
2025-09-11 00:55:02.826978 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.826984 | orchestrator |
2025-09-11 00:55:02.826990 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-11 00:55:02.826996 | orchestrator | Thursday 11 September 2025 00:47:58 +0000 (0:00:00.291) 0:03:24.239 ****
2025-09-11 00:55:02.827002 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827008 | orchestrator |
2025-09-11 00:55:02.827014 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-11 00:55:02.827020 | orchestrator | Thursday 11 September 2025 00:47:59 +0000 (0:00:00.109) 0:03:24.348 ****
2025-09-11 00:55:02.827026 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827032 | orchestrator |
2025-09-11 00:55:02.827038 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-11 00:55:02.827044 | orchestrator | Thursday 11 September 2025 00:47:59 +0000 (0:00:00.300) 0:03:24.649 ****
2025-09-11 00:55:02.827050 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827056 | orchestrator |
2025-09-11 00:55:02.827062 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-11 00:55:02.827068 | orchestrator | Thursday 11 September 2025 00:47:59 +0000 (0:00:00.170) 0:03:24.819 ****
2025-09-11 00:55:02.827074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-11 00:55:02.827085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-11 00:55:02.827091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-11 00:55:02.827097 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827103 | orchestrator |
2025-09-11 00:55:02.827125 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-11 00:55:02.827132 | orchestrator | Thursday 11 September 2025 00:47:59 +0000 (0:00:00.307) 0:03:25.127 ****
2025-09-11 00:55:02.827139 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827164 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.827172 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.827178 | orchestrator |
2025-09-11 00:55:02.827184 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-11 00:55:02.827190 | orchestrator | Thursday 11 September 2025 00:48:00 +0000 (0:00:00.462) 0:03:25.589 ****
2025-09-11 00:55:02.827196 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827202 | orchestrator |
2025-09-11 00:55:02.827208 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-11 00:55:02.827214 | orchestrator | Thursday 11 September 2025 00:48:00 +0000 (0:00:00.185) 0:03:25.775 ****
2025-09-11 00:55:02.827220 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827226 | orchestrator |
2025-09-11 00:55:02.827232 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-11 00:55:02.827239 | orchestrator | Thursday 11 September 2025 00:48:00 +0000 (0:00:00.165) 0:03:25.941 ****
2025-09-11 00:55:02.827245 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.827251 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.827257 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.827263 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.827269 | orchestrator |
2025-09-11 00:55:02.827275 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-11 00:55:02.827281 | orchestrator | Thursday 11 September 2025 00:48:01 +0000 (0:00:00.672) 0:03:26.614 ****
2025-09-11 00:55:02.827287 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.827293 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.827299 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.827306 | orchestrator |
2025-09-11 00:55:02.827312 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-11 00:55:02.827318 | orchestrator | Thursday 11 September 2025 00:48:02 +0000 (0:00:00.698) 0:03:27.312 ****
2025-09-11 00:55:02.827324 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.827330 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.827336 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.827342 | orchestrator |
2025-09-11 00:55:02.827348 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-11 00:55:02.827354 | orchestrator | Thursday 11 September 2025 00:48:03 +0000 (0:00:01.396) 0:03:28.709 ****
2025-09-11 00:55:02.827360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-11 00:55:02.827366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-11 00:55:02.827372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-11 00:55:02.827378 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827384 | orchestrator |
2025-09-11 00:55:02.827390 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-11 00:55:02.827396 | orchestrator | Thursday 11 September 2025 00:48:03 +0000 (0:00:00.491) 0:03:29.200 ****
2025-09-11 00:55:02.827406 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.827412 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.827418 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.827424 | orchestrator |
2025-09-11 00:55:02.827431 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-11 00:55:02.827437 | orchestrator | Thursday 11 September 2025 00:48:04 +0000 (0:00:00.333) 0:03:29.534 ****
2025-09-11 00:55:02.827447 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.827453 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.827460 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.827466 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.827472 | orchestrator |
2025-09-11 00:55:02.827478 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-11 00:55:02.827484 | orchestrator | Thursday 11 September 2025 00:48:05 +0000 (0:00:00.909) 0:03:30.443 ****
2025-09-11 00:55:02.827490 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.827496 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.827502 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.827508 | orchestrator |
2025-09-11 00:55:02.827515 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-11 00:55:02.827521 | orchestrator | Thursday 11 September 2025 00:48:05 +0000 (0:00:00.303) 0:03:30.746 ****
2025-09-11 00:55:02.827527 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.827533 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.827539 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.827545 | orchestrator |
2025-09-11 00:55:02.827551 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-11 00:55:02.827557 | orchestrator | Thursday 11 September 2025 00:48:06 +0000 (0:00:01.397) 0:03:32.144 ****
2025-09-11 00:55:02.827563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-11 00:55:02.827570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-11 00:55:02.827576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-11 00:55:02.827582 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827588 | orchestrator |
2025-09-11 00:55:02.827594 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-11 00:55:02.827600 | orchestrator | Thursday 11 September 2025 00:48:07 +0000 (0:00:00.452) 0:03:32.597 ****
2025-09-11 00:55:02.827606 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.827612 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.827618 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.827624 | orchestrator |
2025-09-11 00:55:02.827630 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-09-11 00:55:02.827637 | orchestrator | Thursday 11 September 2025 00:48:07 +0000 (0:00:00.302) 0:03:32.900 ****
2025-09-11 00:55:02.827643 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827649 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.827655 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.827661 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.827667 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.827673 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.827679 | orchestrator |
2025-09-11 00:55:02.827685 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-11 00:55:02.827709 | orchestrator | Thursday 11 September 2025 00:48:08 +0000 (0:00:00.580) 0:03:33.481 ****
2025-09-11 00:55:02.827717 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.827723 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.827729 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.827735 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-2, testbed-node-1
2025-09-11 00:55:02.827741 | orchestrator |
2025-09-11 00:55:02.827747 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-11 00:55:02.827753 | orchestrator | Thursday 11 September 2025 00:48:09 +0000 (0:00:00.857) 0:03:34.338 ****
2025-09-11 00:55:02.827759 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.827765 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.827772 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.827778 | orchestrator |
2025-09-11 00:55:02.827784 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-11 00:55:02.827794 | orchestrator | Thursday 11 September 2025 00:48:09 +0000 (0:00:00.318) 0:03:34.656 ****
2025-09-11 00:55:02.827800 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.827806 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.827812 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.827818 | orchestrator |
2025-09-11 00:55:02.827824 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-11 00:55:02.827830 | orchestrator | Thursday 11 September 2025 00:48:10 +0000 (0:00:01.399) 0:03:36.056 ****
2025-09-11 00:55:02.827837 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-11 00:55:02.827843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-11 00:55:02.827849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-11 00:55:02.827855 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.827861 | orchestrator |
2025-09-11 00:55:02.827867 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-11 00:55:02.827873 | orchestrator | Thursday 11 September 2025 00:48:11 +0000 (0:00:00.446) 0:03:36.502 ****
2025-09-11 00:55:02.827879 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.827885 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.827891 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.827897 | orchestrator |
2025-09-11 00:55:02.827903 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-09-11 00:55:02.827909 | orchestrator |
2025-09-11 00:55:02.827915 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-11 00:55:02.827922 | orchestrator | Thursday 11 September 2025 00:48:11 +0000 (0:00:00.568) 0:03:37.071 ****
2025-09-11 00:55:02.827928 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:55:02.827934 | orchestrator |
2025-09-11 00:55:02.827940 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-11 00:55:02.827950 | orchestrator | Thursday 11 September 2025 00:48:12 +0000 (0:00:00.678) 0:03:37.749 ****
2025-09-11 00:55:02.827956 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-2, testbed-node-1
2025-09-11 00:55:02.827962 | orchestrator |
2025-09-11 00:55:02.827968 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-11 00:55:02.827974 | orchestrator | Thursday 11 September 2025 00:48:13 +0000 (0:00:00.596) 0:03:38.345 ****
2025-09-11 00:55:02.827980 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.827986 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.827992 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.827998 | orchestrator |
2025-09-11 00:55:02.828004 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-11 00:55:02.828010 | orchestrator | Thursday 11 September 2025 00:48:14 +0000 (0:00:00.949) 0:03:39.295 ****
2025-09-11 00:55:02.828016 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828023 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828029 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828035 | orchestrator |
2025-09-11 00:55:02.828041 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-11 00:55:02.828047 | orchestrator | Thursday 11 September 2025 00:48:14 +0000 (0:00:00.229) 0:03:39.525 ****
2025-09-11 00:55:02.828053 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828059 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828065 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828071 | orchestrator |
2025-09-11 00:55:02.828077 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-11 00:55:02.828083 | orchestrator | Thursday 11 September 2025 00:48:14 +0000 (0:00:00.469) 0:03:39.994 ****
2025-09-11 00:55:02.828089 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828095 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828106 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828129 | orchestrator |
2025-09-11 00:55:02.828135 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-11 00:55:02.828141 | orchestrator | Thursday 11 September 2025 00:48:15 +0000 (0:00:00.292) 0:03:40.286 ****
2025-09-11 00:55:02.828148 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.828154 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.828160 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.828166 | orchestrator |
2025-09-11 00:55:02.828172 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-11 00:55:02.828178 | orchestrator | Thursday 11 September 2025 00:48:15 +0000 (0:00:00.766) 0:03:41.052 ****
2025-09-11 00:55:02.828184 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828190 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828196 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828203 | orchestrator |
2025-09-11 00:55:02.828209 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-11 00:55:02.828215 | orchestrator | Thursday 11 September 2025 00:48:16 +0000 (0:00:00.271) 0:03:41.324 ****
2025-09-11 00:55:02.828221 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828227 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828233 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828239 | orchestrator |
2025-09-11 00:55:02.828265 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-11 00:55:02.828272 | orchestrator | Thursday 11 September 2025 00:48:16 +0000 (0:00:00.431) 0:03:41.756 ****
2025-09-11 00:55:02.828279 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.828285 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.828291 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.828297 | orchestrator |
2025-09-11 00:55:02.828303 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-11 00:55:02.828309 | orchestrator | Thursday 11 September 2025 00:48:17 +0000 (0:00:00.659) 0:03:42.415 ****
2025-09-11 00:55:02.828315 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.828321 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.828327 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.828333 | orchestrator |
2025-09-11 00:55:02.828339 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-11 00:55:02.828345 | orchestrator | Thursday 11 September 2025 00:48:17 +0000 (0:00:00.678) 0:03:43.094 ****
2025-09-11 00:55:02.828351 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828357 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828363 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828369 | orchestrator |
2025-09-11 00:55:02.828375 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-11 00:55:02.828381 | orchestrator | Thursday 11 September 2025 00:48:18 +0000 (0:00:00.307) 0:03:43.401 ****
2025-09-11 00:55:02.828388 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.828394 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.828400 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.828406 | orchestrator |
2025-09-11 00:55:02.828412 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-11 00:55:02.828418 | orchestrator | Thursday 11 September 2025 00:48:18 +0000 (0:00:00.425) 0:03:43.827 ****
2025-09-11 00:55:02.828424 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828430 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828436 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828442 | orchestrator |
2025-09-11 00:55:02.828448 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-11 00:55:02.828454 | orchestrator | Thursday 11 September 2025 00:48:18 +0000 (0:00:00.256) 0:03:44.083 ****
2025-09-11 00:55:02.828460 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828466 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828472 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828478 | orchestrator |
2025-09-11 00:55:02.828484 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-11 00:55:02.828495 | orchestrator | Thursday 11 September 2025 00:48:19 +0000 (0:00:00.256) 0:03:44.340 ****
2025-09-11 00:55:02.828501 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828507 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828513 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828519 | orchestrator |
2025-09-11 00:55:02.828525 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-11 00:55:02.828535 | orchestrator | Thursday 11 September 2025 00:48:19 +0000 (0:00:00.277) 0:03:44.618 ****
2025-09-11 00:55:02.828541 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828547 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828553 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828559 | orchestrator |
2025-09-11 00:55:02.828565 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-11 00:55:02.828571 | orchestrator | Thursday 11 September 2025 00:48:19 +0000 (0:00:00.384) 0:03:45.002 ****
2025-09-11 00:55:02.828577 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828583 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.828589 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.828595 | orchestrator |
2025-09-11 00:55:02.828601 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-11 00:55:02.828607 | orchestrator | Thursday 11 September 2025 00:48:20 +0000 (0:00:00.256) 0:03:45.258 ****
2025-09-11 00:55:02.828614 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.828619 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.828626 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.828632 | orchestrator |
2025-09-11 00:55:02.828638 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-11 00:55:02.828644 | orchestrator | Thursday 11 September 2025 00:48:20 +0000 (0:00:00.260) 0:03:45.519 ****
2025-09-11 00:55:02.828650 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.828656 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.828662 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.828668 | orchestrator |
2025-09-11 00:55:02.828674 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-11 00:55:02.828680 | orchestrator | Thursday 11 September 2025 00:48:20 +0000 (0:00:00.236) 0:03:45.755 ****
2025-09-11 00:55:02.828686 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.828692 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.828698 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.828704 | orchestrator |
2025-09-11 00:55:02.828710 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-09-11 00:55:02.828716 | orchestrator | Thursday 11 September 2025 00:48:21 +0000 (0:00:00.589) 0:03:46.345 ****
2025-09-11 00:55:02.828722 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.828728 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.828734 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.828740 | orchestrator |
2025-09-11 00:55:02.828746 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-09-11 00:55:02.828753 | orchestrator | Thursday 11 September 2025 00:48:21 +0000 (0:00:00.285) 0:03:46.630 ****
2025-09-11 00:55:02.828759 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1, testbed-node-0, testbed-node-2
2025-09-11 00:55:02.828765 | orchestrator |
2025-09-11 00:55:02.828771 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-09-11 00:55:02.828777 | orchestrator | Thursday 11 September 2025 00:48:21 +0000 (0:00:00.465) 0:03:47.095 ****
2025-09-11 00:55:02.828783 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.828789 | orchestrator |
2025-09-11 00:55:02.828795 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-09-11 00:55:02.828819 | orchestrator | Thursday 11 September 2025 00:48:22 +0000 (0:00:00.269) 0:03:47.365 ****
2025-09-11 00:55:02.828826 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-09-11 00:55:02.828838 | orchestrator |
2025-09-11 00:55:02.828844 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-09-11 00:55:02.828850 | orchestrator | Thursday 11 September 2025 00:48:22 +0000 (0:00:00.726) 0:03:48.091 ****
2025-09-11 00:55:02.828856 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.828862 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.828869 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.828875 | orchestrator |
2025-09-11 00:55:02.828881 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-09-11 00:55:02.828887 | orchestrator | Thursday 11 September 2025 00:48:23 +0000 (0:00:00.267) 0:03:48.359 ****
2025-09-11 00:55:02.828893 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.828899 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.828905 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.828911 | orchestrator |
2025-09-11 00:55:02.828917 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-09-11 00:55:02.828924 | orchestrator | Thursday 11 September 2025 00:48:23 +0000 (0:00:00.293) 0:03:48.652 ****
2025-09-11 00:55:02.828930 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.828936 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.828942 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.828948 | orchestrator |
2025-09-11 00:55:02.828954 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-09-11 00:55:02.828960 | orchestrator | Thursday 11 September 2025 00:48:24 +0000 (0:00:01.190) 0:03:49.843 ****
2025-09-11 00:55:02.828966 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.828972 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.828978 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.828984 | orchestrator |
2025-09-11 00:55:02.828990 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-09-11 00:55:02.828996 | orchestrator | Thursday 11 September 2025 00:48:25 +0000 (0:00:00.913) 0:03:50.756 ****
2025-09-11 00:55:02.829002 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.829008 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.829014 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.829020 | orchestrator |
2025-09-11 00:55:02.829026 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-09-11 00:55:02.829033 | orchestrator | Thursday 11 September 2025 00:48:26 +0000 (0:00:00.604) 0:03:51.361 ****
2025-09-11 00:55:02.829039 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.829045 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.829051 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.829057 | orchestrator |
2025-09-11 00:55:02.829063 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-09-11 00:55:02.829069 | orchestrator | Thursday 11 September 2025 00:48:26 +0000 (0:00:00.662) 0:03:52.024 ****
2025-09-11 00:55:02.829075 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.829081 | orchestrator |
2025-09-11 00:55:02.829087 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-09-11 00:55:02.829096 | orchestrator | Thursday 11 September 2025 00:48:28 +0000 (0:00:01.234) 0:03:53.258 ****
2025-09-11 00:55:02.829103 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.829123 | orchestrator |
2025-09-11 00:55:02.829134 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-09-11 00:55:02.829146 | orchestrator | Thursday 11 September 2025 00:48:28 +0000 (0:00:00.661) 0:03:53.920 ****
2025-09-11 00:55:02.829156 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-11 00:55:02.829166 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-11 00:55:02.829176 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-11 00:55:02.829183 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-11 00:55:02.829190 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-09-11 00:55:02.829196 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-11 00:55:02.829207 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-11 00:55:02.829213 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-09-11 00:55:02.829219 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-11 00:55:02.829225 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-09-11 00:55:02.829231 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-09-11 00:55:02.829238 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-09-11 00:55:02.829244 | orchestrator |
2025-09-11 00:55:02.829250 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-09-11 00:55:02.829256 | orchestrator | Thursday 11 September 2025 00:48:32 +0000 (0:00:03.450) 0:03:57.370 ****
2025-09-11 00:55:02.829262 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.829268 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.829274 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.829280 | orchestrator |
2025-09-11 00:55:02.829286 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-09-11 00:55:02.829292 | orchestrator | Thursday 11 September 2025 00:48:33 +0000 (0:00:01.424) 0:03:58.795 ****
2025-09-11 00:55:02.829299 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.829305 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.829311 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.829317 | orchestrator |
2025-09-11 00:55:02.829323 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-09-11 00:55:02.829329 | orchestrator | Thursday 11 September 2025 00:48:33 +0000 (0:00:00.346) 0:03:59.142 ****
2025-09-11 00:55:02.829335 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.829341 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.829347 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.829353 | orchestrator |
2025-09-11 00:55:02.829360 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-09-11 00:55:02.829366 | orchestrator | Thursday 11 September 2025 00:48:34 +0000 (0:00:00.354) 0:03:59.496 ****
2025-09-11 00:55:02.829372 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.829378 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.829384 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.829390 | orchestrator |
2025-09-11 00:55:02.829418 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-09-11 00:55:02.829425 | orchestrator | Thursday 11 September 2025 00:48:35 +0000 (0:00:01.512) 0:04:01.009 ****
2025-09-11 00:55:02.829431 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.829437 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.829443 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.829449 | orchestrator |
2025-09-11 00:55:02.829455 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-09-11 00:55:02.829462 | orchestrator | Thursday 11 September 2025 00:48:37 +0000 (0:00:01.417) 0:04:02.426 ****
2025-09-11 00:55:02.829468 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.829474 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.829480 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.829486 | orchestrator |
2025-09-11 00:55:02.829492 |
orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-11 00:55:02.829498 | orchestrator | Thursday 11 September 2025 00:48:37 +0000 (0:00:00.302) 0:04:02.729 **** 2025-09-11 00:55:02.829504 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.829510 | orchestrator | 2025-09-11 00:55:02.829517 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-11 00:55:02.829523 | orchestrator | Thursday 11 September 2025 00:48:38 +0000 (0:00:00.513) 0:04:03.242 **** 2025-09-11 00:55:02.829529 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.829535 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.829541 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.829547 | orchestrator | 2025-09-11 00:55:02.829553 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-11 00:55:02.829563 | orchestrator | Thursday 11 September 2025 00:48:38 +0000 (0:00:00.489) 0:04:03.732 **** 2025-09-11 00:55:02.829569 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.829576 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.829582 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.829588 | orchestrator | 2025-09-11 00:55:02.829594 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-11 00:55:02.829600 | orchestrator | Thursday 11 September 2025 00:48:38 +0000 (0:00:00.317) 0:04:04.049 **** 2025-09-11 00:55:02.829606 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.829612 | orchestrator | 2025-09-11 00:55:02.829618 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-11 00:55:02.829625 | 
orchestrator | Thursday 11 September 2025 00:48:39 +0000 (0:00:00.507) 0:04:04.557 **** 2025-09-11 00:55:02.829631 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:02.829637 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:02.829643 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:02.829649 | orchestrator | 2025-09-11 00:55:02.829661 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-11 00:55:02.829668 | orchestrator | Thursday 11 September 2025 00:48:41 +0000 (0:00:01.777) 0:04:06.335 **** 2025-09-11 00:55:02.829674 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:02.829680 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:02.829686 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:02.829692 | orchestrator | 2025-09-11 00:55:02.829698 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-11 00:55:02.829704 | orchestrator | Thursday 11 September 2025 00:48:42 +0000 (0:00:01.530) 0:04:07.865 **** 2025-09-11 00:55:02.829710 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:02.829717 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:02.829723 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:02.829729 | orchestrator | 2025-09-11 00:55:02.829735 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-11 00:55:02.829741 | orchestrator | Thursday 11 September 2025 00:48:44 +0000 (0:00:01.833) 0:04:09.699 **** 2025-09-11 00:55:02.829747 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:02.829753 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:02.829759 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:02.829765 | orchestrator | 2025-09-11 00:55:02.829772 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-11 00:55:02.829778 | orchestrator | 
Thursday 11 September 2025 00:48:46 +0000 (0:00:01.979) 0:04:11.678 **** 2025-09-11 00:55:02.829784 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.829790 | orchestrator | 2025-09-11 00:55:02.829796 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-11 00:55:02.829802 | orchestrator | Thursday 11 September 2025 00:48:47 +0000 (0:00:00.810) 0:04:12.488 **** 2025-09-11 00:55:02.829808 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.829814 | orchestrator | 2025-09-11 00:55:02.829821 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-11 00:55:02.829827 | orchestrator | Thursday 11 September 2025 00:48:48 +0000 (0:00:01.292) 0:04:13.781 **** 2025-09-11 00:55:02.829833 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.829839 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.829845 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.829851 | orchestrator | 2025-09-11 00:55:02.829857 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-11 00:55:02.829863 | orchestrator | Thursday 11 September 2025 00:48:58 +0000 (0:00:09.686) 0:04:23.468 **** 2025-09-11 00:55:02.829869 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.829876 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.829888 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.829894 | orchestrator | 2025-09-11 00:55:02.829900 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-11 00:55:02.829906 | orchestrator | Thursday 11 September 2025 00:48:58 +0000 (0:00:00.299) 0:04:23.767 **** 2025-09-11 00:55:02.829933 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 
'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e8c61ec017c9bf724401eae871fdd7d351447254'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-11 00:55:02.829943 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e8c61ec017c9bf724401eae871fdd7d351447254'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-11 00:55:02.829951 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e8c61ec017c9bf724401eae871fdd7d351447254'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-11 00:55:02.829958 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e8c61ec017c9bf724401eae871fdd7d351447254'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-11 00:55:02.829965 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e8c61ec017c9bf724401eae871fdd7d351447254'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-11 00:55:02.829975 | orchestrator | skipping: [testbed-node-0] => 
(item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e8c61ec017c9bf724401eae871fdd7d351447254'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__e8c61ec017c9bf724401eae871fdd7d351447254'}])  2025-09-11 00:55:02.829983 | orchestrator | 2025-09-11 00:55:02.829989 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-11 00:55:02.829996 | orchestrator | Thursday 11 September 2025 00:49:12 +0000 (0:00:14.226) 0:04:37.994 **** 2025-09-11 00:55:02.830002 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830008 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830033 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830041 | orchestrator | 2025-09-11 00:55:02.830047 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-11 00:55:02.830053 | orchestrator | Thursday 11 September 2025 00:49:13 +0000 (0:00:00.292) 0:04:38.287 **** 2025-09-11 00:55:02.830059 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.830065 | orchestrator | 2025-09-11 00:55:02.830071 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-11 00:55:02.830078 | orchestrator | Thursday 11 September 2025 00:49:13 +0000 (0:00:00.452) 0:04:38.739 **** 2025-09-11 00:55:02.830084 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.830090 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.830101 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.830143 | orchestrator | 2025-09-11 00:55:02.830152 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 
2025-09-11 00:55:02.830158 | orchestrator | Thursday 11 September 2025 00:49:13 +0000 (0:00:00.407) 0:04:39.147 **** 2025-09-11 00:55:02.830165 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830171 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830177 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830183 | orchestrator | 2025-09-11 00:55:02.830189 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-11 00:55:02.830195 | orchestrator | Thursday 11 September 2025 00:49:14 +0000 (0:00:00.246) 0:04:39.393 **** 2025-09-11 00:55:02.830201 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-11 00:55:02.830208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-11 00:55:02.830214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-11 00:55:02.830220 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830226 | orchestrator | 2025-09-11 00:55:02.830232 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-11 00:55:02.830238 | orchestrator | Thursday 11 September 2025 00:49:14 +0000 (0:00:00.478) 0:04:39.872 **** 2025-09-11 00:55:02.830244 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.830250 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.830257 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.830263 | orchestrator | 2025-09-11 00:55:02.830269 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-11 00:55:02.830275 | orchestrator | 2025-09-11 00:55:02.830281 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-11 00:55:02.830309 | orchestrator | Thursday 11 September 2025 00:49:15 +0000 (0:00:00.647) 0:04:40.519 **** 2025-09-11 00:55:02.830317 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.830323 | orchestrator | 2025-09-11 00:55:02.830329 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-11 00:55:02.830335 | orchestrator | Thursday 11 September 2025 00:49:15 +0000 (0:00:00.429) 0:04:40.948 **** 2025-09-11 00:55:02.830341 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.830348 | orchestrator | 2025-09-11 00:55:02.830354 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-11 00:55:02.830360 | orchestrator | Thursday 11 September 2025 00:49:16 +0000 (0:00:00.439) 0:04:41.387 **** 2025-09-11 00:55:02.830366 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.830372 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.830378 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.830384 | orchestrator | 2025-09-11 00:55:02.830391 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-11 00:55:02.830397 | orchestrator | Thursday 11 September 2025 00:49:16 +0000 (0:00:00.813) 0:04:42.200 **** 2025-09-11 00:55:02.830403 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830409 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830415 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830421 | orchestrator | 2025-09-11 00:55:02.830427 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-11 00:55:02.830433 | orchestrator | Thursday 11 September 2025 00:49:17 +0000 (0:00:00.255) 0:04:42.456 **** 2025-09-11 00:55:02.830439 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830445 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830452 
| orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830458 | orchestrator | 2025-09-11 00:55:02.830464 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-11 00:55:02.830470 | orchestrator | Thursday 11 September 2025 00:49:17 +0000 (0:00:00.247) 0:04:42.704 **** 2025-09-11 00:55:02.830476 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830488 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830494 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830500 | orchestrator | 2025-09-11 00:55:02.830506 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-11 00:55:02.830513 | orchestrator | Thursday 11 September 2025 00:49:17 +0000 (0:00:00.277) 0:04:42.981 **** 2025-09-11 00:55:02.830519 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.830525 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.830531 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.830537 | orchestrator | 2025-09-11 00:55:02.830543 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-11 00:55:02.830553 | orchestrator | Thursday 11 September 2025 00:49:18 +0000 (0:00:00.819) 0:04:43.801 **** 2025-09-11 00:55:02.830558 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830564 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830569 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830574 | orchestrator | 2025-09-11 00:55:02.830580 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-11 00:55:02.830585 | orchestrator | Thursday 11 September 2025 00:49:18 +0000 (0:00:00.252) 0:04:44.053 **** 2025-09-11 00:55:02.830591 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830596 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830601 | orchestrator | 
skipping: [testbed-node-2] 2025-09-11 00:55:02.830606 | orchestrator | 2025-09-11 00:55:02.830612 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-11 00:55:02.830617 | orchestrator | Thursday 11 September 2025 00:49:19 +0000 (0:00:00.290) 0:04:44.343 **** 2025-09-11 00:55:02.830623 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.830628 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.830633 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.830639 | orchestrator | 2025-09-11 00:55:02.830644 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-11 00:55:02.830649 | orchestrator | Thursday 11 September 2025 00:49:19 +0000 (0:00:00.742) 0:04:45.086 **** 2025-09-11 00:55:02.830655 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.830660 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.830665 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.830671 | orchestrator | 2025-09-11 00:55:02.830676 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-11 00:55:02.830682 | orchestrator | Thursday 11 September 2025 00:49:20 +0000 (0:00:01.028) 0:04:46.115 **** 2025-09-11 00:55:02.830687 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830692 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830698 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830703 | orchestrator | 2025-09-11 00:55:02.830708 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-11 00:55:02.830714 | orchestrator | Thursday 11 September 2025 00:49:21 +0000 (0:00:00.256) 0:04:46.371 **** 2025-09-11 00:55:02.830719 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.830724 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.830730 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.830735 | 
orchestrator | 2025-09-11 00:55:02.830741 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-11 00:55:02.830746 | orchestrator | Thursday 11 September 2025 00:49:21 +0000 (0:00:00.267) 0:04:46.638 **** 2025-09-11 00:55:02.830751 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830757 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830762 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830767 | orchestrator | 2025-09-11 00:55:02.830776 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-11 00:55:02.830786 | orchestrator | Thursday 11 September 2025 00:49:21 +0000 (0:00:00.243) 0:04:46.882 **** 2025-09-11 00:55:02.830795 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830804 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830818 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830827 | orchestrator | 2025-09-11 00:55:02.830836 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-11 00:55:02.830868 | orchestrator | Thursday 11 September 2025 00:49:22 +0000 (0:00:00.424) 0:04:47.307 **** 2025-09-11 00:55:02.830878 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830886 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830896 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830904 | orchestrator | 2025-09-11 00:55:02.830910 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-11 00:55:02.830915 | orchestrator | Thursday 11 September 2025 00:49:22 +0000 (0:00:00.261) 0:04:47.568 **** 2025-09-11 00:55:02.830920 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830926 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830931 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830936 | 
orchestrator | 2025-09-11 00:55:02.830942 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-11 00:55:02.830947 | orchestrator | Thursday 11 September 2025 00:49:22 +0000 (0:00:00.244) 0:04:47.813 **** 2025-09-11 00:55:02.830953 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.830958 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.830963 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.830969 | orchestrator | 2025-09-11 00:55:02.830974 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-11 00:55:02.830979 | orchestrator | Thursday 11 September 2025 00:49:22 +0000 (0:00:00.249) 0:04:48.062 **** 2025-09-11 00:55:02.830985 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.830990 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.830995 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.831001 | orchestrator | 2025-09-11 00:55:02.831006 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-11 00:55:02.831011 | orchestrator | Thursday 11 September 2025 00:49:23 +0000 (0:00:00.324) 0:04:48.387 **** 2025-09-11 00:55:02.831017 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.831022 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.831027 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.831033 | orchestrator | 2025-09-11 00:55:02.831038 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-11 00:55:02.831043 | orchestrator | Thursday 11 September 2025 00:49:23 +0000 (0:00:00.402) 0:04:48.790 **** 2025-09-11 00:55:02.831049 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.831054 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.831059 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.831065 | orchestrator | 2025-09-11 00:55:02.831070 | orchestrator 
| TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-11 00:55:02.831075 | orchestrator | Thursday 11 September 2025 00:49:24 +0000 (0:00:00.514) 0:04:49.304 **** 2025-09-11 00:55:02.831081 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-11 00:55:02.831086 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-11 00:55:02.831092 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-11 00:55:02.831097 | orchestrator | 2025-09-11 00:55:02.831107 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-11 00:55:02.831131 | orchestrator | Thursday 11 September 2025 00:49:24 +0000 (0:00:00.676) 0:04:49.981 **** 2025-09-11 00:55:02.831136 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.831142 | orchestrator | 2025-09-11 00:55:02.831147 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-11 00:55:02.831153 | orchestrator | Thursday 11 September 2025 00:49:25 +0000 (0:00:00.608) 0:04:50.589 **** 2025-09-11 00:55:02.831158 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:02.831163 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:02.831169 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:02.831178 | orchestrator | 2025-09-11 00:55:02.831184 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-11 00:55:02.831189 | orchestrator | Thursday 11 September 2025 00:49:25 +0000 (0:00:00.640) 0:04:51.230 **** 2025-09-11 00:55:02.831195 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.831200 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.831205 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.831210 | 
orchestrator | 2025-09-11 00:55:02.831216 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-11 00:55:02.831221 | orchestrator | Thursday 11 September 2025 00:49:26 +0000 (0:00:00.251) 0:04:51.481 **** 2025-09-11 00:55:02.831226 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-11 00:55:02.831232 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-11 00:55:02.831238 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-11 00:55:02.831243 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-11 00:55:02.831248 | orchestrator | 2025-09-11 00:55:02.831254 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-11 00:55:02.831259 | orchestrator | Thursday 11 September 2025 00:49:37 +0000 (0:00:11.248) 0:05:02.730 **** 2025-09-11 00:55:02.831264 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.831270 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.831275 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.831280 | orchestrator | 2025-09-11 00:55:02.831286 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-11 00:55:02.831291 | orchestrator | Thursday 11 September 2025 00:49:38 +0000 (0:00:00.537) 0:05:03.267 **** 2025-09-11 00:55:02.831296 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-11 00:55:02.831302 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-11 00:55:02.831307 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-11 00:55:02.831312 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:55:02.831318 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:55:02.831323 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-11 00:55:02.831328 | orchestrator | 
2025-09-11 00:55:02.831334 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-09-11 00:55:02.831339 | orchestrator | Thursday 11 September 2025 00:49:40 +0000 (0:00:02.274) 0:05:05.541 ****
2025-09-11 00:55:02.831364 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-11 00:55:02.831370 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-11 00:55:02.831376 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-11 00:55:02.831381 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-11 00:55:02.831386 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-11 00:55:02.831392 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-11 00:55:02.831397 | orchestrator |
2025-09-11 00:55:02.831402 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-09-11 00:55:02.831408 | orchestrator | Thursday 11 September 2025 00:49:41 +0000 (0:00:01.242) 0:05:06.784 ****
2025-09-11 00:55:02.831413 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.831419 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.831424 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.831429 | orchestrator |
2025-09-11 00:55:02.831435 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-09-11 00:55:02.831440 | orchestrator | Thursday 11 September 2025 00:49:42 +0000 (0:00:00.665) 0:05:07.449 ****
2025-09-11 00:55:02.831445 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.831451 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.831456 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.831461 | orchestrator |
2025-09-11 00:55:02.831467 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-09-11 00:55:02.831472 | orchestrator | Thursday 11 September 2025 00:49:42 +0000 (0:00:00.318) 0:05:07.768 ****
2025-09-11 00:55:02.831481 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.831487 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.831492 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.831497 | orchestrator |
2025-09-11 00:55:02.831503 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-09-11 00:55:02.831508 | orchestrator | Thursday 11 September 2025 00:49:43 +0000 (0:00:00.537) 0:05:08.305 ****
2025-09-11 00:55:02.831513 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:55:02.831519 | orchestrator |
2025-09-11 00:55:02.831524 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-09-11 00:55:02.831530 | orchestrator | Thursday 11 September 2025 00:49:43 +0000 (0:00:00.496) 0:05:08.802 ****
2025-09-11 00:55:02.831535 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.831540 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.831546 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.831551 | orchestrator |
2025-09-11 00:55:02.831556 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-09-11 00:55:02.831562 | orchestrator | Thursday 11 September 2025 00:49:43 +0000 (0:00:00.320) 0:05:09.122 ****
2025-09-11 00:55:02.831567 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.831572 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.831578 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:55:02.831583 | orchestrator |
2025-09-11 00:55:02.831592 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-09-11 00:55:02.831597 | orchestrator | Thursday 11 September 2025 00:49:44 +0000 (0:00:00.406) 0:05:09.529 ****
2025-09-11 00:55:02.831603 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:55:02.831608 | orchestrator |
2025-09-11 00:55:02.831614 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-09-11 00:55:02.831619 | orchestrator | Thursday 11 September 2025 00:49:44 +0000 (0:00:00.461) 0:05:09.990 ****
2025-09-11 00:55:02.831624 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.831629 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.831635 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.831640 | orchestrator |
2025-09-11 00:55:02.831645 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-09-11 00:55:02.831651 | orchestrator | Thursday 11 September 2025 00:49:45 +0000 (0:00:01.154) 0:05:11.145 ****
2025-09-11 00:55:02.831656 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.831662 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.831667 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.831672 | orchestrator |
2025-09-11 00:55:02.831678 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-09-11 00:55:02.831683 | orchestrator | Thursday 11 September 2025 00:49:47 +0000 (0:00:01.306) 0:05:12.451 ****
2025-09-11 00:55:02.831689 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.831694 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.831699 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.831705 | orchestrator |
2025-09-11 00:55:02.831710 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-09-11 00:55:02.831715 | orchestrator | Thursday 11 September 2025 00:49:48 +0000 (0:00:01.759) 0:05:14.211 ****
2025-09-11 00:55:02.831721 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.831726 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.831731 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.831737 | orchestrator |
2025-09-11 00:55:02.831742 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-09-11 00:55:02.831747 | orchestrator | Thursday 11 September 2025 00:49:51 +0000 (0:00:02.042) 0:05:16.254 ****
2025-09-11 00:55:02.831753 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.831762 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:55:02.831767 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-09-11 00:55:02.831773 | orchestrator |
2025-09-11 00:55:02.831778 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-09-11 00:55:02.831783 | orchestrator | Thursday 11 September 2025 00:49:51 +0000 (0:00:00.405) 0:05:16.659 ****
2025-09-11 00:55:02.831789 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-09-11 00:55:02.831794 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-09-11 00:55:02.831815 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-09-11 00:55:02.831822 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-09-11 00:55:02.831827 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2025-09-11 00:55:02.831833 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2025-09-11 00:55:02.831838 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-11 00:55:02.831843 | orchestrator |
2025-09-11 00:55:02.831849 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-09-11 00:55:02.831854 | orchestrator | Thursday 11 September 2025 00:50:28 +0000 (0:00:36.821) 0:05:53.480 ****
2025-09-11 00:55:02.831859 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-11 00:55:02.831865 | orchestrator |
2025-09-11 00:55:02.831870 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-09-11 00:55:02.831875 | orchestrator | Thursday 11 September 2025 00:50:29 +0000 (0:00:01.304) 0:05:54.785 ****
2025-09-11 00:55:02.831881 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.831886 | orchestrator |
2025-09-11 00:55:02.831892 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-09-11 00:55:02.831897 | orchestrator | Thursday 11 September 2025 00:50:29 +0000 (0:00:00.271) 0:05:55.057 ****
2025-09-11 00:55:02.831902 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.831908 | orchestrator |
2025-09-11 00:55:02.831913 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-09-11 00:55:02.831918 | orchestrator | Thursday 11 September 2025 00:50:29 +0000 (0:00:00.133) 0:05:55.190 ****
2025-09-11 00:55:02.831924 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-09-11 00:55:02.831929 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-09-11 00:55:02.831934 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-09-11 00:55:02.831939 | orchestrator |
2025-09-11 00:55:02.831945 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-09-11 00:55:02.831950 | orchestrator | Thursday 11 September 2025 00:50:36 +0000 (0:00:06.442) 0:06:01.633 ****
2025-09-11 00:55:02.831959 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-09-11 00:55:02.831968 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-09-11 00:55:02.831978 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-09-11 00:55:02.831986 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-09-11 00:55:02.831994 | orchestrator |
2025-09-11 00:55:02.832006 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-11 00:55:02.832016 | orchestrator | Thursday 11 September 2025 00:50:41 +0000 (0:00:04.996) 0:06:06.630 ****
2025-09-11 00:55:02.832025 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.832035 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.832042 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.832047 | orchestrator |
2025-09-11 00:55:02.832052 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-11 00:55:02.832062 | orchestrator | Thursday 11 September 2025 00:50:42 +0000 (0:00:00.806) 0:06:07.437 ****
2025-09-11 00:55:02.832067 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:55:02.832073 | orchestrator |
2025-09-11 00:55:02.832078 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-11 00:55:02.832083 | orchestrator | Thursday 11 September 2025 00:50:42 +0000 (0:00:00.453) 0:06:07.890 ****
2025-09-11 00:55:02.832089 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.832094 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.832099 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.832105 | orchestrator |
2025-09-11 00:55:02.832128 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-11 00:55:02.832134 | orchestrator | Thursday 11 September 2025 00:50:42 +0000 (0:00:00.268) 0:06:08.159 ****
2025-09-11 00:55:02.832140 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.832145 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.832151 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.832156 | orchestrator |
2025-09-11 00:55:02.832161 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-11 00:55:02.832166 | orchestrator | Thursday 11 September 2025 00:50:44 +0000 (0:00:01.182) 0:06:09.341 ****
2025-09-11 00:55:02.832172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-11 00:55:02.832177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-11 00:55:02.832182 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-11 00:55:02.832188 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:55:02.832193 | orchestrator |
2025-09-11 00:55:02.832198 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-11 00:55:02.832204 | orchestrator | Thursday 11 September 2025 00:50:44 +0000 (0:00:00.458) 0:06:09.799 ****
2025-09-11 00:55:02.832209 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.832214 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.832220 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.832225 | orchestrator |
2025-09-11 00:55:02.832230 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-09-11 00:55:02.832236 | orchestrator |
2025-09-11 00:55:02.832241 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-11 00:55:02.832246 | orchestrator | Thursday 11 September 2025 00:50:44 +0000 (0:00:00.392) 0:06:10.192 ****
2025-09-11 00:55:02.832252 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.832257 | orchestrator |
2025-09-11 00:55:02.832282 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-11 00:55:02.832288 | orchestrator | Thursday 11 September 2025 00:50:45 +0000 (0:00:00.532) 0:06:10.725 ****
2025-09-11 00:55:02.832294 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.832299 | orchestrator |
2025-09-11 00:55:02.832304 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-11 00:55:02.832310 | orchestrator | Thursday 11 September 2025 00:50:45 +0000 (0:00:00.474) 0:06:11.199 ****
2025-09-11 00:55:02.832315 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.832320 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.832326 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.832331 | orchestrator |
2025-09-11 00:55:02.832336 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-11 00:55:02.832341 | orchestrator | Thursday 11 September 2025 00:50:46 +0000 (0:00:00.263) 0:06:11.463 ****
2025-09-11 00:55:02.832347 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832352 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832357 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832366 | orchestrator |
2025-09-11 00:55:02.832372 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-11 00:55:02.832377 | orchestrator | Thursday 11 September 2025 00:50:47 +0000 (0:00:00.801) 0:06:12.265 ****
2025-09-11 00:55:02.832382 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832388 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832393 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832398 | orchestrator |
2025-09-11 00:55:02.832404 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-11 00:55:02.832409 | orchestrator | Thursday 11 September 2025 00:50:47 +0000 (0:00:00.670) 0:06:12.936 ****
2025-09-11 00:55:02.832414 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832420 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832425 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832430 | orchestrator |
2025-09-11 00:55:02.832436 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-11 00:55:02.832441 | orchestrator | Thursday 11 September 2025 00:50:48 +0000 (0:00:00.660) 0:06:13.596 ****
2025-09-11 00:55:02.832446 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.832452 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.832457 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.832462 | orchestrator |
2025-09-11 00:55:02.832467 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-11 00:55:02.832473 | orchestrator | Thursday 11 September 2025 00:50:48 +0000 (0:00:00.255) 0:06:13.851 ****
2025-09-11 00:55:02.832478 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.832483 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.832489 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.832494 | orchestrator |
2025-09-11 00:55:02.832505 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-11 00:55:02.832510 | orchestrator | Thursday 11 September 2025 00:50:49 +0000 (0:00:00.403) 0:06:14.255 ****
2025-09-11 00:55:02.832516 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.832521 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.832526 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.832532 | orchestrator |
2025-09-11 00:55:02.832537 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-11 00:55:02.832542 | orchestrator | Thursday 11 September 2025 00:50:49 +0000 (0:00:00.274) 0:06:14.529 ****
2025-09-11 00:55:02.832547 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832553 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832558 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832563 | orchestrator |
2025-09-11 00:55:02.832569 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-11 00:55:02.832574 | orchestrator | Thursday 11 September 2025 00:50:49 +0000 (0:00:00.707) 0:06:15.237 ****
2025-09-11 00:55:02.832579 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832585 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832590 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832595 | orchestrator |
2025-09-11 00:55:02.832600 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-11 00:55:02.832606 | orchestrator | Thursday 11 September 2025 00:50:50 +0000 (0:00:00.676) 0:06:15.913 ****
2025-09-11 00:55:02.832611 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.832616 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.832621 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.832627 | orchestrator |
2025-09-11 00:55:02.832632 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-11 00:55:02.832637 | orchestrator | Thursday 11 September 2025 00:50:51 +0000 (0:00:00.420) 0:06:16.334 ****
2025-09-11 00:55:02.832643 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.832648 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.832653 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.832659 | orchestrator |
2025-09-11 00:55:02.832664 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-11 00:55:02.832673 | orchestrator | Thursday 11 September 2025 00:50:51 +0000 (0:00:00.267) 0:06:16.601 ****
2025-09-11 00:55:02.832678 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832684 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832689 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832694 | orchestrator |
2025-09-11 00:55:02.832699 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-11 00:55:02.832705 | orchestrator | Thursday 11 September 2025 00:50:51 +0000 (0:00:00.273) 0:06:16.875 ****
2025-09-11 00:55:02.832710 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832715 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832721 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832726 | orchestrator |
2025-09-11 00:55:02.832731 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-11 00:55:02.832737 | orchestrator | Thursday 11 September 2025 00:50:51 +0000 (0:00:00.269) 0:06:17.144 ****
2025-09-11 00:55:02.832742 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832747 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832752 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832758 | orchestrator |
2025-09-11 00:55:02.832763 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-11 00:55:02.832771 | orchestrator | Thursday 11 September 2025 00:50:52 +0000 (0:00:00.429) 0:06:17.573 ****
2025-09-11 00:55:02.832777 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.832782 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.832787 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.832792 | orchestrator |
2025-09-11 00:55:02.832798 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-11 00:55:02.832803 | orchestrator | Thursday 11 September 2025 00:50:52 +0000 (0:00:00.262) 0:06:17.836 ****
2025-09-11 00:55:02.832808 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.832814 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.832819 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.832824 | orchestrator |
2025-09-11 00:55:02.832830 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-11 00:55:02.832835 | orchestrator | Thursday 11 September 2025 00:50:52 +0000 (0:00:00.256) 0:06:18.093 ****
2025-09-11 00:55:02.832840 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.832845 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.832851 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.832856 | orchestrator |
2025-09-11 00:55:02.832861 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-11 00:55:02.832866 | orchestrator | Thursday 11 September 2025 00:50:53 +0000 (0:00:00.256) 0:06:18.349 ****
2025-09-11 00:55:02.832872 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832877 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832882 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832888 | orchestrator |
2025-09-11 00:55:02.832893 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-11 00:55:02.832898 | orchestrator | Thursday 11 September 2025 00:50:53 +0000 (0:00:00.442) 0:06:18.791 ****
2025-09-11 00:55:02.832904 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832909 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832914 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832919 | orchestrator |
2025-09-11 00:55:02.832925 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-09-11 00:55:02.832930 | orchestrator | Thursday 11 September 2025 00:50:54 +0000 (0:00:00.473) 0:06:19.265 ****
2025-09-11 00:55:02.832935 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.832941 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.832946 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.832951 | orchestrator |
2025-09-11 00:55:02.832956 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-09-11 00:55:02.832962 | orchestrator | Thursday 11 September 2025 00:50:54 +0000 (0:00:00.269) 0:06:19.535 ****
2025-09-11 00:55:02.832970 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-11 00:55:02.832976 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-11 00:55:02.832981 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-11 00:55:02.832986 | orchestrator |
2025-09-11 00:55:02.832994 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-09-11 00:55:02.833000 | orchestrator | Thursday 11 September 2025 00:50:55 +0000 (0:00:00.739) 0:06:20.274 ****
2025-09-11 00:55:02.833005 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-5, testbed-node-4
2025-09-11 00:55:02.833010 | orchestrator |
2025-09-11 00:55:02.833016 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-09-11 00:55:02.833021 | orchestrator | Thursday 11 September 2025 00:50:55 +0000 (0:00:00.623) 0:06:20.898 ****
2025-09-11 00:55:02.833026 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.833032 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.833037 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.833042 | orchestrator |
2025-09-11 00:55:02.833048 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-09-11 00:55:02.833053 | orchestrator | Thursday 11 September 2025 00:50:55 +0000 (0:00:00.263) 0:06:21.162 ****
2025-09-11 00:55:02.833058 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.833064 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.833069 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.833074 | orchestrator |
2025-09-11 00:55:02.833080 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-09-11 00:55:02.833085 | orchestrator | Thursday 11 September 2025 00:50:56 +0000 (0:00:00.259) 0:06:21.421 ****
2025-09-11 00:55:02.833090 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.833096 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.833101 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.833106 | orchestrator |
2025-09-11 00:55:02.833128 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-11 00:55:02.833134 | orchestrator | Thursday 11 September 2025 00:50:56 +0000 (0:00:00.800) 0:06:22.221 ****
2025-09-11 00:55:02.833139 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.833144 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.833150 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.833155 | orchestrator |
2025-09-11 00:55:02.833160 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-11 00:55:02.833166 | orchestrator | Thursday 11 September 2025 00:50:57 +0000 (0:00:00.353) 0:06:22.575 ****
2025-09-11 00:55:02.833171 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-11 00:55:02.833177 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-11 00:55:02.833182 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-11 00:55:02.833187 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-11 00:55:02.833193 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-11 00:55:02.833198 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-11 00:55:02.833208 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-11 00:55:02.833213 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-11 00:55:02.833219 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-11 00:55:02.833224 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-11 00:55:02.833229 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-11 00:55:02.833238 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-11 00:55:02.833244 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-11 00:55:02.833249 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-11 00:55:02.833254 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-11 00:55:02.833260 | orchestrator |
2025-09-11 00:55:02.833265 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-09-11 00:55:02.833270 | orchestrator | Thursday 11 September 2025 00:50:59 +0000 (0:00:02.141) 0:06:24.716 ****
2025-09-11 00:55:02.833276 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.833281 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.833286 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.833292 | orchestrator |
2025-09-11 00:55:02.833297 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-11 00:55:02.833302 | orchestrator | Thursday 11 September 2025 00:50:59 +0000 (0:00:00.291) 0:06:25.007 ****
2025-09-11 00:55:02.833307 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.833313 | orchestrator |
2025-09-11 00:55:02.833318 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-11 00:55:02.833323 | orchestrator | Thursday 11 September 2025 00:51:00 +0000 (0:00:00.801) 0:06:25.809 ****
2025-09-11 00:55:02.833329 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-11 00:55:02.833334 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-11 00:55:02.833339 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-11 00:55:02.833345 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-11 00:55:02.833350 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-11 00:55:02.833356 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-11 00:55:02.833361 | orchestrator |
2025-09-11 00:55:02.833369 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-11 00:55:02.833375 | orchestrator | Thursday 11 September 2025 00:51:01 +0000 (0:00:01.004) 0:06:26.813 ****
2025-09-11 00:55:02.833380 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-11 00:55:02.833386 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-11 00:55:02.833391 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-11 00:55:02.833396 | orchestrator |
2025-09-11 00:55:02.833402 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-11 00:55:02.833407 | orchestrator | Thursday 11 September 2025 00:51:03 +0000 (0:00:02.173) 0:06:28.987 ****
2025-09-11 00:55:02.833412 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-11 00:55:02.833418 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-11 00:55:02.833423 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.833428 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-11 00:55:02.833434 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-11 00:55:02.833439 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-11 00:55:02.833444 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.833450 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-11 00:55:02.833455 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.833460 | orchestrator |
2025-09-11 00:55:02.833465 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-11 00:55:02.833471 | orchestrator | Thursday 11 September 2025 00:51:05 +0000 (0:00:01.394) 0:06:30.382 ****
2025-09-11 00:55:02.833476 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-11 00:55:02.833481 | orchestrator |
2025-09-11 00:55:02.833487 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-11 00:55:02.833495 | orchestrator | Thursday 11 September 2025 00:51:07 +0000 (0:00:02.528) 0:06:32.911 ****
2025-09-11 00:55:02.833501 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.833506 | orchestrator |
2025-09-11 00:55:02.833511 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-11 00:55:02.833517 | orchestrator | Thursday 11 September 2025 00:51:08 +0000 (0:00:00.533) 0:06:33.444 ****
2025-09-11 00:55:02.833522 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-344fe78f-9b90-543d-a55e-ac4ca1a09e29', 'data_vg': 'ceph-344fe78f-9b90-543d-a55e-ac4ca1a09e29'})
2025-09-11 00:55:02.833528 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7', 'data_vg': 'ceph-7f9f8cff-4bc3-57f6-8883-7f2afe56eba7'})
2025-09-11 00:55:02.833533 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1fcfbff8-db79-5f3f-a505-ec8e716f38d6', 'data_vg': 'ceph-1fcfbff8-db79-5f3f-a505-ec8e716f38d6'})
2025-09-11 00:55:02.833542 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8a3e2512-7b8b-5f78-845d-17a09314c972', 'data_vg': 'ceph-8a3e2512-7b8b-5f78-845d-17a09314c972'})
2025-09-11 00:55:02.833548 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2', 'data_vg': 'ceph-4b4178b7-2f3b-5f27-b2b6-7c3306310ac2'})
2025-09-11 00:55:02.833553 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0befa402-ebd4-5a4e-889f-8c71805f12b9', 'data_vg': 'ceph-0befa402-ebd4-5a4e-889f-8c71805f12b9'})
2025-09-11 00:55:02.833558 | orchestrator |
2025-09-11 00:55:02.833564 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-11 00:55:02.833569 | orchestrator | Thursday 11 September 2025 00:51:46 +0000 (0:00:38.335) 0:07:11.779 ****
2025-09-11 00:55:02.833575 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.833580 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.833585 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.833590 | orchestrator |
2025-09-11 00:55:02.833596 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-11 00:55:02.833601 | orchestrator | Thursday 11 September 2025 00:51:47 +0000 (0:00:00.545) 0:07:12.325 ****
2025-09-11 00:55:02.833606 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.833612 | orchestrator |
2025-09-11 00:55:02.833617 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-11 00:55:02.833622 | orchestrator | Thursday 11 September 2025 00:51:47 +0000 (0:00:00.508) 0:07:12.834 ****
2025-09-11 00:55:02.833628 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.833633 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.833638 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.833644 | orchestrator |
2025-09-11 00:55:02.833649 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-11 00:55:02.833654 | orchestrator | Thursday 11 September 2025 00:51:48 +0000 (0:00:00.644) 0:07:13.478 ****
2025-09-11 00:55:02.833660 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.833665 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.833671 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.833676 | orchestrator |
2025-09-11 00:55:02.833681 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-11 00:55:02.833687 | orchestrator | Thursday 11 September 2025 00:51:51 +0000 (0:00:02.956) 0:07:16.435 ****
2025-09-11 00:55:02.833692 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.833697 | orchestrator |
2025-09-11 00:55:02.833703 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-09-11 00:55:02.833708 | orchestrator | Thursday 11 September 2025 00:51:51 +0000 (0:00:00.536) 0:07:16.972 ****
2025-09-11 00:55:02.833720 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.833725 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.833731 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.833736 | orchestrator |
2025-09-11 00:55:02.833741 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-09-11 00:55:02.833747 | orchestrator | Thursday 11 September 2025 00:51:52 +0000 (0:00:01.122) 0:07:18.095 ****
2025-09-11 00:55:02.833752 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.833757 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.833762 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.833768 | orchestrator |
2025-09-11 00:55:02.833773 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-09-11 00:55:02.833778 | orchestrator | Thursday 11 September 2025 00:51:54 +0000 (0:00:01.398) 0:07:19.493 ****
2025-09-11 00:55:02.833784 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.833789 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.833794 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.833799 | orchestrator |
2025-09-11 00:55:02.833805 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-09-11 00:55:02.833810 | orchestrator | Thursday 11 September 2025 00:51:55 +0000 (0:00:01.744) 0:07:21.238 ****
2025-09-11 00:55:02.833816 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.833821 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.833826 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.833832 | orchestrator |
2025-09-11 00:55:02.833837 | orchestrator | TASK [ceph-osd : Add ceph-osd
systemd service overrides] *********************** 2025-09-11 00:55:02.833842 | orchestrator | Thursday 11 September 2025 00:51:56 +0000 (0:00:00.381) 0:07:21.619 **** 2025-09-11 00:55:02.833848 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.833853 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.833858 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.833863 | orchestrator | 2025-09-11 00:55:02.833869 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-11 00:55:02.833874 | orchestrator | Thursday 11 September 2025 00:51:56 +0000 (0:00:00.314) 0:07:21.934 **** 2025-09-11 00:55:02.833879 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-09-11 00:55:02.833885 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-09-11 00:55:02.833890 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-09-11 00:55:02.833895 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-11 00:55:02.833901 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-09-11 00:55:02.833906 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-09-11 00:55:02.833911 | orchestrator | 2025-09-11 00:55:02.833916 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-11 00:55:02.833922 | orchestrator | Thursday 11 September 2025 00:51:57 +0000 (0:00:01.286) 0:07:23.221 **** 2025-09-11 00:55:02.833927 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-09-11 00:55:02.833932 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-11 00:55:02.833938 | orchestrator | changed: [testbed-node-5] => (item=0) 2025-09-11 00:55:02.833943 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-09-11 00:55:02.833948 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-11 00:55:02.833954 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-11 00:55:02.833959 | orchestrator | 2025-09-11 00:55:02.833967 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-09-11 00:55:02.833972 | orchestrator | Thursday 11 September 2025 00:52:00 +0000 (0:00:02.086) 0:07:25.307 **** 2025-09-11 00:55:02.833978 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-09-11 00:55:02.833983 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-11 00:55:02.833988 | orchestrator | changed: [testbed-node-5] => (item=0) 2025-09-11 00:55:02.833994 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-09-11 00:55:02.833999 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-11 00:55:02.834004 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-11 00:55:02.834031 | orchestrator | 2025-09-11 00:55:02.834038 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-11 00:55:02.834044 | orchestrator | Thursday 11 September 2025 00:52:03 +0000 (0:00:03.603) 0:07:28.910 **** 2025-09-11 00:55:02.834049 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834055 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834060 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-11 00:55:02.834065 | orchestrator | 2025-09-11 00:55:02.834071 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-11 00:55:02.834076 | orchestrator | Thursday 11 September 2025 00:52:06 +0000 (0:00:03.057) 0:07:31.967 **** 2025-09-11 00:55:02.834082 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834087 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834092 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-09-11 00:55:02.834098 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-11 00:55:02.834103 | orchestrator | 2025-09-11 00:55:02.834141 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-11 00:55:02.834148 | orchestrator | Thursday 11 September 2025 00:52:19 +0000 (0:00:12.590) 0:07:44.558 **** 2025-09-11 00:55:02.834153 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834159 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834164 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.834169 | orchestrator | 2025-09-11 00:55:02.834175 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-11 00:55:02.834180 | orchestrator | Thursday 11 September 2025 00:52:20 +0000 (0:00:00.810) 0:07:45.369 **** 2025-09-11 00:55:02.834185 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834191 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834196 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.834201 | orchestrator | 2025-09-11 00:55:02.834207 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-11 00:55:02.834212 | orchestrator | Thursday 11 September 2025 00:52:20 +0000 (0:00:00.595) 0:07:45.965 **** 2025-09-11 00:55:02.834217 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:55:02.834223 | orchestrator | 2025-09-11 00:55:02.834231 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-11 00:55:02.834237 | orchestrator | Thursday 11 September 2025 00:52:21 +0000 (0:00:00.524) 0:07:46.489 **** 2025-09-11 00:55:02.834242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:55:02.834248 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-09-11 00:55:02.834252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-11 00:55:02.834257 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834262 | orchestrator | 2025-09-11 00:55:02.834266 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-11 00:55:02.834271 | orchestrator | Thursday 11 September 2025 00:52:21 +0000 (0:00:00.385) 0:07:46.875 **** 2025-09-11 00:55:02.834276 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834281 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834285 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.834290 | orchestrator | 2025-09-11 00:55:02.834295 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-11 00:55:02.834300 | orchestrator | Thursday 11 September 2025 00:52:21 +0000 (0:00:00.298) 0:07:47.173 **** 2025-09-11 00:55:02.834304 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834309 | orchestrator | 2025-09-11 00:55:02.834314 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-11 00:55:02.834318 | orchestrator | Thursday 11 September 2025 00:52:22 +0000 (0:00:00.696) 0:07:47.870 **** 2025-09-11 00:55:02.834323 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834332 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834337 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.834342 | orchestrator | 2025-09-11 00:55:02.834347 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-11 00:55:02.834351 | orchestrator | Thursday 11 September 2025 00:52:22 +0000 (0:00:00.298) 0:07:48.169 **** 2025-09-11 00:55:02.834356 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834361 | orchestrator | 2025-09-11 00:55:02.834366 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-11 00:55:02.834370 | orchestrator | Thursday 11 September 2025 00:52:23 +0000 (0:00:00.224) 0:07:48.393 **** 2025-09-11 00:55:02.834375 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834380 | orchestrator | 2025-09-11 00:55:02.834384 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-11 00:55:02.834389 | orchestrator | Thursday 11 September 2025 00:52:23 +0000 (0:00:00.226) 0:07:48.620 **** 2025-09-11 00:55:02.834394 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834399 | orchestrator | 2025-09-11 00:55:02.834403 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-11 00:55:02.834408 | orchestrator | Thursday 11 September 2025 00:52:23 +0000 (0:00:00.146) 0:07:48.766 **** 2025-09-11 00:55:02.834413 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834417 | orchestrator | 2025-09-11 00:55:02.834422 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-11 00:55:02.834427 | orchestrator | Thursday 11 September 2025 00:52:23 +0000 (0:00:00.203) 0:07:48.970 **** 2025-09-11 00:55:02.834436 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834440 | orchestrator | 2025-09-11 00:55:02.834445 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-11 00:55:02.834450 | orchestrator | Thursday 11 September 2025 00:52:23 +0000 (0:00:00.200) 0:07:49.170 **** 2025-09-11 00:55:02.834455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-11 00:55:02.834460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-11 00:55:02.834464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:55:02.834469 | orchestrator | skipping: [testbed-node-3] 2025-09-11 
00:55:02.834474 | orchestrator | 2025-09-11 00:55:02.834479 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-11 00:55:02.834483 | orchestrator | Thursday 11 September 2025 00:52:24 +0000 (0:00:00.375) 0:07:49.546 **** 2025-09-11 00:55:02.834488 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834493 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834497 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.834502 | orchestrator | 2025-09-11 00:55:02.834507 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-11 00:55:02.834512 | orchestrator | Thursday 11 September 2025 00:52:24 +0000 (0:00:00.305) 0:07:49.851 **** 2025-09-11 00:55:02.834516 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834521 | orchestrator | 2025-09-11 00:55:02.834526 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-11 00:55:02.834531 | orchestrator | Thursday 11 September 2025 00:52:25 +0000 (0:00:00.802) 0:07:50.654 **** 2025-09-11 00:55:02.834536 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834540 | orchestrator | 2025-09-11 00:55:02.834545 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-11 00:55:02.834550 | orchestrator | 2025-09-11 00:55:02.834555 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-11 00:55:02.834559 | orchestrator | Thursday 11 September 2025 00:52:26 +0000 (0:00:00.669) 0:07:51.323 **** 2025-09-11 00:55:02.834564 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-0, testbed-node-1, testbed-node-5, testbed-node-2 2025-09-11 00:55:02.834570 | orchestrator | 2025-09-11 00:55:02.834574 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-11 00:55:02.834583 | orchestrator | Thursday 11 September 2025 00:52:27 +0000 (0:00:01.304) 0:07:52.628 **** 2025-09-11 00:55:02.834588 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.834593 | orchestrator | 2025-09-11 00:55:02.834597 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-11 00:55:02.834605 | orchestrator | Thursday 11 September 2025 00:52:28 +0000 (0:00:01.171) 0:07:53.800 **** 2025-09-11 00:55:02.834610 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834615 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834619 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.834624 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.834629 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.834634 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.834638 | orchestrator | 2025-09-11 00:55:02.834643 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-11 00:55:02.834648 | orchestrator | Thursday 11 September 2025 00:52:29 +0000 (0:00:01.214) 0:07:55.014 **** 2025-09-11 00:55:02.834652 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.834657 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.834662 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.834666 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.834671 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.834676 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.834681 | orchestrator | 2025-09-11 00:55:02.834685 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-11 00:55:02.834690 | orchestrator | Thursday 11 
September 2025 00:52:30 +0000 (0:00:00.720) 0:07:55.734 **** 2025-09-11 00:55:02.834695 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.834699 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.834704 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.834709 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.834714 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.834718 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.834723 | orchestrator | 2025-09-11 00:55:02.834728 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-11 00:55:02.834732 | orchestrator | Thursday 11 September 2025 00:52:31 +0000 (0:00:00.955) 0:07:56.690 **** 2025-09-11 00:55:02.834737 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.834742 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.834747 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.834751 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.834756 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.834761 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.834766 | orchestrator | 2025-09-11 00:55:02.834770 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-11 00:55:02.834775 | orchestrator | Thursday 11 September 2025 00:52:32 +0000 (0:00:00.733) 0:07:57.423 **** 2025-09-11 00:55:02.834780 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834785 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834789 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.834794 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.834799 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.834803 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.834808 | orchestrator | 2025-09-11 00:55:02.834813 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-09-11 00:55:02.834818 | orchestrator | Thursday 11 September 2025 00:52:33 +0000 (0:00:01.350) 0:07:58.774 **** 2025-09-11 00:55:02.834822 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834827 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834832 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.834837 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.834841 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.834851 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.834856 | orchestrator | 2025-09-11 00:55:02.834861 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-11 00:55:02.834866 | orchestrator | Thursday 11 September 2025 00:52:34 +0000 (0:00:00.602) 0:07:59.376 **** 2025-09-11 00:55:02.834871 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.834875 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.834880 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.834885 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.834890 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.834894 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.834899 | orchestrator | 2025-09-11 00:55:02.834904 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-11 00:55:02.834909 | orchestrator | Thursday 11 September 2025 00:52:34 +0000 (0:00:00.571) 0:07:59.948 **** 2025-09-11 00:55:02.834913 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.834918 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.834923 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.834928 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.834932 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.834937 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.834942 | orchestrator 
| 2025-09-11 00:55:02.834946 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-11 00:55:02.834951 | orchestrator | Thursday 11 September 2025 00:52:36 +0000 (0:00:01.400) 0:08:01.349 **** 2025-09-11 00:55:02.834956 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.834960 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.834965 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.834970 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.834975 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.834979 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.834984 | orchestrator | 2025-09-11 00:55:02.834989 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-11 00:55:02.834994 | orchestrator | Thursday 11 September 2025 00:52:37 +0000 (0:00:00.962) 0:08:02.312 **** 2025-09-11 00:55:02.834998 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.835003 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.835008 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.835012 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.835017 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.835022 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.835027 | orchestrator | 2025-09-11 00:55:02.835031 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-11 00:55:02.835036 | orchestrator | Thursday 11 September 2025 00:52:37 +0000 (0:00:00.776) 0:08:03.089 **** 2025-09-11 00:55:02.835041 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.835046 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.835050 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.835055 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.835060 | orchestrator | ok: [testbed-node-1] 2025-09-11 
00:55:02.835064 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.835069 | orchestrator | 2025-09-11 00:55:02.835078 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-11 00:55:02.835083 | orchestrator | Thursday 11 September 2025 00:52:38 +0000 (0:00:00.559) 0:08:03.648 **** 2025-09-11 00:55:02.835088 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.835092 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.835097 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.835102 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.835107 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.835122 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.835127 | orchestrator | 2025-09-11 00:55:02.835132 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-11 00:55:02.835137 | orchestrator | Thursday 11 September 2025 00:52:39 +0000 (0:00:00.833) 0:08:04.481 **** 2025-09-11 00:55:02.835145 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.835150 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.835155 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.835159 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.835164 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.835169 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.835174 | orchestrator | 2025-09-11 00:55:02.835179 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-11 00:55:02.835183 | orchestrator | Thursday 11 September 2025 00:52:39 +0000 (0:00:00.594) 0:08:05.076 **** 2025-09-11 00:55:02.835188 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.835193 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.835198 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.835203 | orchestrator | skipping: [testbed-node-0] 
2025-09-11 00:55:02.835207 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.835212 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.835217 | orchestrator | 2025-09-11 00:55:02.835222 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-11 00:55:02.835226 | orchestrator | Thursday 11 September 2025 00:52:40 +0000 (0:00:00.818) 0:08:05.895 **** 2025-09-11 00:55:02.835231 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.835236 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.835240 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.835245 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.835250 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.835255 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.835259 | orchestrator | 2025-09-11 00:55:02.835264 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-11 00:55:02.835269 | orchestrator | Thursday 11 September 2025 00:52:41 +0000 (0:00:00.564) 0:08:06.459 **** 2025-09-11 00:55:02.835274 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.835278 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.835283 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.835288 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:02.835292 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:02.835297 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:02.835302 | orchestrator | 2025-09-11 00:55:02.835307 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-11 00:55:02.835311 | orchestrator | Thursday 11 September 2025 00:52:41 +0000 (0:00:00.760) 0:08:07.220 **** 2025-09-11 00:55:02.835316 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.835321 | orchestrator | skipping: [testbed-node-4] 
2025-09-11 00:55:02.835326 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.835333 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.835338 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.835343 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.835347 | orchestrator | 2025-09-11 00:55:02.835352 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-11 00:55:02.835357 | orchestrator | Thursday 11 September 2025 00:52:42 +0000 (0:00:00.610) 0:08:07.831 **** 2025-09-11 00:55:02.835362 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.835366 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.835371 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.835376 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.835381 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.835385 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.835390 | orchestrator | 2025-09-11 00:55:02.835395 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-11 00:55:02.835400 | orchestrator | Thursday 11 September 2025 00:52:43 +0000 (0:00:00.963) 0:08:08.794 **** 2025-09-11 00:55:02.835404 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.835409 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.835414 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.835418 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.835426 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:02.835431 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:02.835436 | orchestrator | 2025-09-11 00:55:02.835441 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-11 00:55:02.835445 | orchestrator | Thursday 11 September 2025 00:52:44 +0000 (0:00:01.370) 0:08:10.165 **** 2025-09-11 00:55:02.835450 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2025-09-11 00:55:02.835455 | orchestrator | 2025-09-11 00:55:02.835460 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-11 00:55:02.835465 | orchestrator | Thursday 11 September 2025 00:52:49 +0000 (0:00:04.159) 0:08:14.324 **** 2025-09-11 00:55:02.835469 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-11 00:55:02.835474 | orchestrator | 2025-09-11 00:55:02.835479 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-11 00:55:02.835483 | orchestrator | Thursday 11 September 2025 00:52:51 +0000 (0:00:02.130) 0:08:16.455 **** 2025-09-11 00:55:02.835488 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:55:02.835493 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:55:02.835498 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:55:02.835502 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:02.835507 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:02.835512 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:02.835516 | orchestrator | 2025-09-11 00:55:02.835521 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-11 00:55:02.835526 | orchestrator | Thursday 11 September 2025 00:52:52 +0000 (0:00:01.499) 0:08:17.954 **** 2025-09-11 00:55:02.835531 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:55:02.835535 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:55:02.835540 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:55:02.835545 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:02.835549 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:02.835557 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:02.835562 | orchestrator | 2025-09-11 00:55:02.835567 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2025-09-11 00:55:02.835571 | orchestrator | Thursday 11 September 2025 00:52:53 +0000 (0:00:01.026) 0:08:18.981 **** 2025-09-11 00:55:02.835576 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:02.835581 | orchestrator | 2025-09-11 00:55:02.835586 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-11 00:55:02.835591 | orchestrator | Thursday 11 September 2025 00:52:54 +0000 (0:00:01.044) 0:08:20.025 **** 2025-09-11 00:55:02.835596 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:55:02.835600 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:55:02.835605 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:55:02.835610 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:02.835614 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:02.835619 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:02.835624 | orchestrator | 2025-09-11 00:55:02.835629 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-11 00:55:02.835633 | orchestrator | Thursday 11 September 2025 00:52:56 +0000 (0:00:01.366) 0:08:21.392 **** 2025-09-11 00:55:02.835638 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:55:02.835643 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:55:02.835647 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:02.835652 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:02.835657 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:02.835661 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:55:02.835666 | orchestrator | 2025-09-11 00:55:02.835671 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-11 00:55:02.835676 | orchestrator | Thursday 11 September 2025 00:52:59 +0000 (0:00:03.759) 
0:08:25.152 ****
2025-09-11 00:55:02.835684 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:55:02.835689 | orchestrator |
2025-09-11 00:55:02.835693 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-09-11 00:55:02.835698 | orchestrator | Thursday 11 September 2025 00:53:01 +0000 (0:00:01.103) 0:08:26.256 ****
2025-09-11 00:55:02.835703 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.835708 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.835713 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.835717 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.835722 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.835727 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.835732 | orchestrator |
2025-09-11 00:55:02.835736 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-09-11 00:55:02.835741 | orchestrator | Thursday 11 September 2025 00:53:01 +0000 (0:00:00.551) 0:08:26.808 ****
2025-09-11 00:55:02.835746 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.835751 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.835755 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.835762 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:55:02.835767 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:55:02.835772 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:55:02.835777 | orchestrator |
2025-09-11 00:55:02.835782 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-09-11 00:55:02.835786 | orchestrator | Thursday 11 September 2025 00:53:03 +0000 (0:00:02.325) 0:08:29.133 ****
2025-09-11 00:55:02.835791 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.835796 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.835801 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.835806 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:55:02.835810 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:55:02.835815 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:55:02.835820 | orchestrator |
2025-09-11 00:55:02.835824 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-09-11 00:55:02.835829 | orchestrator |
2025-09-11 00:55:02.835834 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-11 00:55:02.835839 | orchestrator | Thursday 11 September 2025 00:53:04 +0000 (0:00:00.734) 0:08:29.868 ****
2025-09-11 00:55:02.835844 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.835848 | orchestrator |
2025-09-11 00:55:02.835853 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-11 00:55:02.835858 | orchestrator | Thursday 11 September 2025 00:53:05 +0000 (0:00:00.693) 0:08:30.562 ****
2025-09-11 00:55:02.835863 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.835868 | orchestrator |
2025-09-11 00:55:02.835872 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-11 00:55:02.835877 | orchestrator | Thursday 11 September 2025 00:53:05 +0000 (0:00:00.551) 0:08:31.114 ****
2025-09-11 00:55:02.835882 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.835887 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.835891 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.835896 | orchestrator |
2025-09-11 00:55:02.835901 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-11 00:55:02.835906 | orchestrator | Thursday 11 September 2025 00:53:06 +0000 (0:00:00.417) 0:08:31.531 ****
2025-09-11 00:55:02.835910 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.835915 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.835920 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.835925 | orchestrator |
2025-09-11 00:55:02.835929 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-11 00:55:02.835937 | orchestrator | Thursday 11 September 2025 00:53:06 +0000 (0:00:00.650) 0:08:32.181 ****
2025-09-11 00:55:02.835942 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.835947 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.835952 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.835957 | orchestrator |
2025-09-11 00:55:02.835964 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-11 00:55:02.835969 | orchestrator | Thursday 11 September 2025 00:53:07 +0000 (0:00:00.644) 0:08:32.825 ****
2025-09-11 00:55:02.835974 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.835979 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.835983 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.835988 | orchestrator |
2025-09-11 00:55:02.835993 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-11 00:55:02.835998 | orchestrator | Thursday 11 September 2025 00:53:08 +0000 (0:00:00.755) 0:08:33.581 ****
2025-09-11 00:55:02.836002 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.836007 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.836012 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.836017 | orchestrator |
2025-09-11 00:55:02.836022 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-11 00:55:02.836026 | orchestrator | Thursday 11 September 2025 00:53:08 +0000 (0:00:00.415) 0:08:33.997 ****
2025-09-11 00:55:02.836031 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.836036 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.836041 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.836045 | orchestrator |
2025-09-11 00:55:02.836050 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-11 00:55:02.836055 | orchestrator | Thursday 11 September 2025 00:53:09 +0000 (0:00:00.302) 0:08:34.300 ****
2025-09-11 00:55:02.836060 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.836064 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.836069 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.836074 | orchestrator |
2025-09-11 00:55:02.836079 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-11 00:55:02.836084 | orchestrator | Thursday 11 September 2025 00:53:09 +0000 (0:00:00.293) 0:08:34.593 ****
2025-09-11 00:55:02.836088 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.836093 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.836098 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.836102 | orchestrator |
2025-09-11 00:55:02.836107 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-11 00:55:02.836125 | orchestrator | Thursday 11 September 2025 00:53:10 +0000 (0:00:00.685) 0:08:35.279 ****
2025-09-11 00:55:02.836129 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.836134 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.836139 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.836144 | orchestrator |
2025-09-11 00:55:02.836149 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-11 00:55:02.836153 | orchestrator | Thursday 11 September 2025 00:53:10 +0000 (0:00:00.937) 0:08:36.216 ****
2025-09-11 00:55:02.836158 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.836163 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.836168 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.836172 | orchestrator |
2025-09-11 00:55:02.836177 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-11 00:55:02.836182 | orchestrator | Thursday 11 September 2025 00:53:11 +0000 (0:00:00.319) 0:08:36.536 ****
2025-09-11 00:55:02.836187 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.836191 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.836196 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.836201 | orchestrator |
2025-09-11 00:55:02.836209 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-11 00:55:02.836214 | orchestrator | Thursday 11 September 2025 00:53:11 +0000 (0:00:00.289) 0:08:36.825 ****
2025-09-11 00:55:02.836223 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.836228 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.836232 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.836237 | orchestrator |
2025-09-11 00:55:02.836242 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-11 00:55:02.836247 | orchestrator | Thursday 11 September 2025 00:53:11 +0000 (0:00:00.321) 0:08:37.147 ****
2025-09-11 00:55:02.836252 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.836256 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.836261 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.836266 | orchestrator |
2025-09-11 00:55:02.836271 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-11 00:55:02.836275 | orchestrator | Thursday 11 September 2025 00:53:12 +0000 (0:00:00.448) 0:08:37.596 ****
2025-09-11 00:55:02.836280 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.836285 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.836289 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.836294 | orchestrator |
2025-09-11 00:55:02.836299 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-11 00:55:02.836304 | orchestrator | Thursday 11 September 2025 00:53:12 +0000 (0:00:00.274) 0:08:37.871 ****
2025-09-11 00:55:02.836308 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.836313 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.836318 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.836323 | orchestrator |
2025-09-11 00:55:02.836328 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-11 00:55:02.836332 | orchestrator | Thursday 11 September 2025 00:53:12 +0000 (0:00:00.254) 0:08:38.125 ****
2025-09-11 00:55:02.836337 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.836342 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.836346 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.836351 | orchestrator |
2025-09-11 00:55:02.836356 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-11 00:55:02.836361 | orchestrator | Thursday 11 September 2025 00:53:13 +0000 (0:00:00.272) 0:08:38.398 ****
2025-09-11 00:55:02.836366 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.836370 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.836375 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.836380 | orchestrator |
2025-09-11 00:55:02.836385 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-11 00:55:02.836389 | orchestrator | Thursday 11 September 2025 00:53:13 +0000 (0:00:00.384) 0:08:38.782 ****
2025-09-11 00:55:02.836394 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.836399 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.836404 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.836408 | orchestrator |
2025-09-11 00:55:02.836413 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-11 00:55:02.836422 | orchestrator | Thursday 11 September 2025 00:53:13 +0000 (0:00:00.273) 0:08:39.055 ****
2025-09-11 00:55:02.836426 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.836431 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.836436 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.836441 | orchestrator |
2025-09-11 00:55:02.836445 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-09-11 00:55:02.836450 | orchestrator | Thursday 11 September 2025 00:53:14 +0000 (0:00:00.530) 0:08:39.586 ****
2025-09-11 00:55:02.836455 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.836460 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.836464 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-09-11 00:55:02.836469 | orchestrator |
2025-09-11 00:55:02.836474 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-09-11 00:55:02.836479 | orchestrator | Thursday 11 September 2025 00:53:14 +0000 (0:00:00.554) 0:08:40.141 ****
2025-09-11 00:55:02.836484 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-11 00:55:02.836495 | orchestrator |
2025-09-11 00:55:02.836503 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-09-11 00:55:02.836511 | orchestrator | Thursday 11 September 2025 00:53:17 +0000 (0:00:02.176) 0:08:42.317 ****
2025-09-11 00:55:02.836521 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-09-11 00:55:02.836531 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.836540 | orchestrator |
2025-09-11 00:55:02.836547 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-09-11 00:55:02.836555 | orchestrator | Thursday 11 September 2025 00:53:17 +0000 (0:00:00.193) 0:08:42.510 ****
2025-09-11 00:55:02.836564 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-11 00:55:02.836573 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-11 00:55:02.836581 | orchestrator |
2025-09-11 00:55:02.836589 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-09-11 00:55:02.836596 | orchestrator | Thursday 11 September 2025 00:53:25 +0000 (0:00:07.742) 0:08:50.253 ****
2025-09-11 00:55:02.836604 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-11 00:55:02.836612 | orchestrator |
2025-09-11 00:55:02.836624 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-09-11 00:55:02.836632 | orchestrator | Thursday 11 September 2025 00:53:28 +0000 (0:00:03.664) 0:08:53.918 ****
2025-09-11 00:55:02.836640 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.836648 | orchestrator |
2025-09-11 00:55:02.836656 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-09-11 00:55:02.836664 | orchestrator | Thursday 11 September 2025 00:53:29 +0000 (0:00:00.722) 0:08:54.641 ****
2025-09-11 00:55:02.836671 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-11 00:55:02.836679 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-11 00:55:02.836688 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-11 00:55:02.836696 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-09-11 00:55:02.836704 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-09-11 00:55:02.836711 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-09-11 00:55:02.836718 | orchestrator |
2025-09-11 00:55:02.836723 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-09-11 00:55:02.836728 | orchestrator | Thursday 11 September 2025 00:53:30 +0000 (0:00:01.314) 0:08:55.955 ****
2025-09-11 00:55:02.836733 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-11 00:55:02.836737 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-11 00:55:02.836742 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-11 00:55:02.836747 | orchestrator |
2025-09-11 00:55:02.836752 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-09-11 00:55:02.836756 | orchestrator | Thursday 11 September 2025 00:53:33 +0000 (0:00:02.533) 0:08:58.489 ****
2025-09-11 00:55:02.836761 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-11 00:55:02.836766 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-11 00:55:02.836771 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.836783 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-11 00:55:02.836788 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-11 00:55:02.836792 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.836797 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-11 00:55:02.836802 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-11 00:55:02.836807 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.836811 | orchestrator |
2025-09-11 00:55:02.836816 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-09-11 00:55:02.836824 | orchestrator | Thursday 11 September 2025 00:53:34 +0000 (0:00:01.280) 0:08:59.770 ****
2025-09-11 00:55:02.836829 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.836834 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.836839 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.836843 | orchestrator |
2025-09-11 00:55:02.836848 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-09-11 00:55:02.836853 | orchestrator | Thursday 11 September 2025 00:53:37 +0000 (0:00:02.919) 0:09:02.689 ****
2025-09-11 00:55:02.836857 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.836862 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.836867 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.836871 | orchestrator |
2025-09-11 00:55:02.836876 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-09-11 00:55:02.836881 | orchestrator | Thursday 11 September 2025 00:53:37 +0000 (0:00:00.300) 0:09:02.990 ****
2025-09-11 00:55:02.836886 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.836890 | orchestrator |
2025-09-11 00:55:02.836895 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-09-11 00:55:02.836900 | orchestrator | Thursday 11 September 2025 00:53:38 +0000 (0:00:00.530) 0:09:03.521 ****
2025-09-11 00:55:02.836904 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.836909 | orchestrator |
2025-09-11 00:55:02.836914 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-09-11 00:55:02.836919 | orchestrator | Thursday 11 September 2025 00:53:38 +0000 (0:00:00.706) 0:09:04.227 ****
2025-09-11 00:55:02.836924 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.836928 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.836933 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.836938 | orchestrator |
2025-09-11 00:55:02.836942 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-09-11 00:55:02.836947 | orchestrator | Thursday 11 September 2025 00:53:40 +0000 (0:00:01.292) 0:09:05.520 ****
2025-09-11 00:55:02.836952 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.836957 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.836961 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.836966 | orchestrator |
2025-09-11 00:55:02.836971 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-09-11 00:55:02.836975 | orchestrator | Thursday 11 September 2025 00:53:41 +0000 (0:00:01.154) 0:09:06.675 ****
2025-09-11 00:55:02.836980 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.836985 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.836990 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.836994 | orchestrator |
2025-09-11 00:55:02.836999 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-09-11 00:55:02.837004 | orchestrator | Thursday 11 September 2025 00:53:43 +0000 (0:00:01.838) 0:09:08.513 ****
2025-09-11 00:55:02.837009 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.837013 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.837018 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.837023 | orchestrator |
2025-09-11 00:55:02.837031 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-09-11 00:55:02.837041 | orchestrator | Thursday 11 September 2025 00:53:45 +0000 (0:00:02.270) 0:09:10.783 ****
2025-09-11 00:55:02.837045 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837050 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837055 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837059 | orchestrator |
2025-09-11 00:55:02.837064 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-11 00:55:02.837069 | orchestrator | Thursday 11 September 2025 00:53:46 +0000 (0:00:01.179) 0:09:11.963 ****
2025-09-11 00:55:02.837074 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.837079 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.837083 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.837088 | orchestrator |
2025-09-11 00:55:02.837093 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-11 00:55:02.837097 | orchestrator | Thursday 11 September 2025 00:53:47 +0000 (0:00:00.918) 0:09:12.882 ****
2025-09-11 00:55:02.837102 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.837107 | orchestrator |
2025-09-11 00:55:02.837124 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-11 00:55:02.837129 | orchestrator | Thursday 11 September 2025 00:53:48 +0000 (0:00:00.506) 0:09:13.388 ****
2025-09-11 00:55:02.837133 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837138 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837143 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837148 | orchestrator |
2025-09-11 00:55:02.837152 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-11 00:55:02.837157 | orchestrator | Thursday 11 September 2025 00:53:48 +0000 (0:00:00.288) 0:09:13.677 ****
2025-09-11 00:55:02.837162 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.837167 | orchestrator | changed: [testbed-node-4]
2025-09-11 00:55:02.837171 | orchestrator | changed: [testbed-node-5]
2025-09-11 00:55:02.837176 | orchestrator |
2025-09-11 00:55:02.837181 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-11 00:55:02.837185 | orchestrator | Thursday 11 September 2025 00:53:49 +0000 (0:00:01.484) 0:09:15.161 ****
2025-09-11 00:55:02.837190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-11 00:55:02.837195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-11 00:55:02.837200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-11 00:55:02.837204 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.837209 | orchestrator |
2025-09-11 00:55:02.837214 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-11 00:55:02.837218 | orchestrator | Thursday 11 September 2025 00:53:50 +0000 (0:00:00.597) 0:09:15.759 ****
2025-09-11 00:55:02.837223 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837228 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837233 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837237 | orchestrator |
2025-09-11 00:55:02.837245 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-11 00:55:02.837250 | orchestrator |
2025-09-11 00:55:02.837255 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-11 00:55:02.837260 | orchestrator | Thursday 11 September 2025 00:53:51 +0000 (0:00:00.539) 0:09:16.299 ****
2025-09-11 00:55:02.837264 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.837269 | orchestrator |
2025-09-11 00:55:02.837274 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-11 00:55:02.837278 | orchestrator | Thursday 11 September 2025 00:53:51 +0000 (0:00:00.698) 0:09:16.997 ****
2025-09-11 00:55:02.837283 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.837288 | orchestrator |
2025-09-11 00:55:02.837293 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-11 00:55:02.837301 | orchestrator | Thursday 11 September 2025 00:53:52 +0000 (0:00:00.509) 0:09:17.507 ****
2025-09-11 00:55:02.837306 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.837311 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.837316 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.837320 | orchestrator |
2025-09-11 00:55:02.837325 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-11 00:55:02.837330 | orchestrator | Thursday 11 September 2025 00:53:52 +0000 (0:00:00.479) 0:09:17.986 ****
2025-09-11 00:55:02.837335 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837339 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837344 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837349 | orchestrator |
2025-09-11 00:55:02.837354 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-11 00:55:02.837358 | orchestrator | Thursday 11 September 2025 00:53:53 +0000 (0:00:00.719) 0:09:18.706 ****
2025-09-11 00:55:02.837363 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837368 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837373 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837377 | orchestrator |
2025-09-11 00:55:02.837382 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-11 00:55:02.837387 | orchestrator | Thursday 11 September 2025 00:53:54 +0000 (0:00:00.690) 0:09:19.396 ****
2025-09-11 00:55:02.837392 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837396 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837401 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837406 | orchestrator |
2025-09-11 00:55:02.837411 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-11 00:55:02.837415 | orchestrator | Thursday 11 September 2025 00:53:54 +0000 (0:00:00.708) 0:09:20.105 ****
2025-09-11 00:55:02.837420 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.837425 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.837430 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.837434 | orchestrator |
2025-09-11 00:55:02.837439 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-11 00:55:02.837446 | orchestrator | Thursday 11 September 2025 00:53:55 +0000 (0:00:00.551) 0:09:20.656 ****
2025-09-11 00:55:02.837451 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.837456 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.837461 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.837466 | orchestrator |
2025-09-11 00:55:02.837470 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-11 00:55:02.837475 | orchestrator | Thursday 11 September 2025 00:53:55 +0000 (0:00:00.290) 0:09:20.947 ****
2025-09-11 00:55:02.837480 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.837485 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.837489 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.837494 | orchestrator |
2025-09-11 00:55:02.837499 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-11 00:55:02.837504 | orchestrator | Thursday 11 September 2025 00:53:56 +0000 (0:00:00.298) 0:09:21.245 ****
2025-09-11 00:55:02.837508 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837513 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837518 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837523 | orchestrator |
2025-09-11 00:55:02.837527 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-11 00:55:02.837534 | orchestrator | Thursday 11 September 2025 00:53:56 +0000 (0:00:00.692) 0:09:21.937 ****
2025-09-11 00:55:02.837543 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837551 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837559 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837567 | orchestrator |
2025-09-11 00:55:02.837576 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-11 00:55:02.837584 | orchestrator | Thursday 11 September 2025 00:53:57 +0000 (0:00:00.950) 0:09:22.888 ****
2025-09-11 00:55:02.837599 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.837604 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.837608 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.837613 | orchestrator |
2025-09-11 00:55:02.837618 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-11 00:55:02.837623 | orchestrator | Thursday 11 September 2025 00:53:57 +0000 (0:00:00.313) 0:09:23.201 ****
2025-09-11 00:55:02.837627 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.837632 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.837637 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.837641 | orchestrator |
2025-09-11 00:55:02.837646 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-11 00:55:02.837651 | orchestrator | Thursday 11 September 2025 00:53:58 +0000 (0:00:00.297) 0:09:23.499 ****
2025-09-11 00:55:02.837656 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837660 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837665 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837670 | orchestrator |
2025-09-11 00:55:02.837674 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-11 00:55:02.837679 | orchestrator | Thursday 11 September 2025 00:53:58 +0000 (0:00:00.326) 0:09:23.826 ****
2025-09-11 00:55:02.837684 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837689 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837697 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837702 | orchestrator |
2025-09-11 00:55:02.837707 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-11 00:55:02.837712 | orchestrator | Thursday 11 September 2025 00:53:59 +0000 (0:00:00.535) 0:09:24.361 ****
2025-09-11 00:55:02.837717 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837721 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837726 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837730 | orchestrator |
2025-09-11 00:55:02.837735 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-11 00:55:02.837740 | orchestrator | Thursday 11 September 2025 00:53:59 +0000 (0:00:00.331) 0:09:24.692 ****
2025-09-11 00:55:02.837745 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.837749 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.837754 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.837759 | orchestrator |
2025-09-11 00:55:02.837764 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-11 00:55:02.837768 | orchestrator | Thursday 11 September 2025 00:53:59 +0000 (0:00:00.293) 0:09:24.986 ****
2025-09-11 00:55:02.837773 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.837778 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.837782 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.837787 | orchestrator |
2025-09-11 00:55:02.837792 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-11 00:55:02.837797 | orchestrator | Thursday 11 September 2025 00:54:00 +0000 (0:00:00.303) 0:09:25.290 ****
2025-09-11 00:55:02.837801 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:55:02.837806 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:55:02.837811 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:55:02.837815 | orchestrator |
2025-09-11 00:55:02.837820 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-11 00:55:02.837825 | orchestrator | Thursday 11 September 2025 00:54:00 +0000 (0:00:00.552) 0:09:25.843 ****
2025-09-11 00:55:02.837829 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837834 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837839 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837843 | orchestrator |
2025-09-11 00:55:02.837848 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-11 00:55:02.837853 | orchestrator | Thursday 11 September 2025 00:54:00 +0000 (0:00:00.335) 0:09:26.178 ****
2025-09-11 00:55:02.837858 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.837866 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.837871 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.837876 | orchestrator |
2025-09-11 00:55:02.837880 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-09-11 00:55:02.837885 | orchestrator | Thursday 11 September 2025 00:54:01 +0000 (0:00:00.536) 0:09:26.715 ****
2025-09-11 00:55:02.837890 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:55:02.837895 | orchestrator |
2025-09-11 00:55:02.837899 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-09-11 00:55:02.837904 | orchestrator | Thursday 11 September 2025 00:54:02 +0000 (0:00:00.710) 0:09:27.425 ****
2025-09-11 00:55:02.837912 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-11 00:55:02.837917 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-11 00:55:02.837922 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-11 00:55:02.837927 | orchestrator |
2025-09-11 00:55:02.837932 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-09-11 00:55:02.837936 | orchestrator | Thursday 11 September 2025 00:54:04 +0000 (0:00:02.143) 0:09:29.569 ****
2025-09-11 00:55:02.837941 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-11 00:55:02.837946 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-11 00:55:02.837951 | orchestrator | changed: [testbed-node-3]
2025-09-11 00:55:02.837955 | orchestrator
| changed: [testbed-node-4] => (item=None) 2025-09-11 00:55:02.837960 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-11 00:55:02.837965 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:55:02.837970 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-11 00:55:02.837974 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-11 00:55:02.837979 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:55:02.837984 | orchestrator | 2025-09-11 00:55:02.837988 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-11 00:55:02.837993 | orchestrator | Thursday 11 September 2025 00:54:05 +0000 (0:00:01.206) 0:09:30.776 **** 2025-09-11 00:55:02.837998 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.838003 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.838007 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.838012 | orchestrator | 2025-09-11 00:55:02.838037 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-11 00:55:02.838042 | orchestrator | Thursday 11 September 2025 00:54:05 +0000 (0:00:00.306) 0:09:31.082 **** 2025-09-11 00:55:02.838046 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:55:02.838051 | orchestrator | 2025-09-11 00:55:02.838056 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-11 00:55:02.838061 | orchestrator | Thursday 11 September 2025 00:54:06 +0000 (0:00:00.821) 0:09:31.903 **** 2025-09-11 00:55:02.838066 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.838071 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.838079 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.838084 | orchestrator | 2025-09-11 00:55:02.838089 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-11 00:55:02.838094 | orchestrator | Thursday 11 September 2025 00:54:07 +0000 (0:00:00.829) 0:09:32.733 **** 2025-09-11 00:55:02.838098 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:55:02.838107 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-11 00:55:02.838140 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:55:02.838145 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-11 00:55:02.838149 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:55:02.838154 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-11 00:55:02.838159 | orchestrator | 2025-09-11 00:55:02.838164 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-11 00:55:02.838169 | orchestrator | Thursday 11 September 2025 00:54:11 +0000 (0:00:04.302) 0:09:37.035 **** 2025-09-11 00:55:02.838173 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:55:02.838178 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-11 00:55:02.838183 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:55:02.838188 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-11 00:55:02.838193 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:55:02.838198 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-11 00:55:02.838202 | orchestrator | 2025-09-11 00:55:02.838207 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-11 00:55:02.838212 | orchestrator | Thursday 11 September 2025 00:54:14 +0000 (0:00:02.942) 0:09:39.977 **** 2025-09-11 00:55:02.838217 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-11 00:55:02.838221 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-11 00:55:02.838226 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:55:02.838231 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:55:02.838236 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-11 00:55:02.838241 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:55:02.838245 | orchestrator | 2025-09-11 00:55:02.838250 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-11 00:55:02.838255 | orchestrator | Thursday 11 September 2025 00:54:15 +0000 (0:00:01.253) 0:09:41.231 **** 2025-09-11 00:55:02.838263 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-11 00:55:02.838268 | orchestrator | 2025-09-11 00:55:02.838273 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-11 00:55:02.838278 | orchestrator | Thursday 11 September 2025 00:54:16 +0000 (0:00:00.221) 0:09:41.452 **** 2025-09-11 00:55:02.838283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-09-11 00:55:02.838288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-11 00:55:02.838293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-11 00:55:02.838297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-11 00:55:02.838302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-11 00:55:02.838307 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.838312 | orchestrator | 2025-09-11 00:55:02.838317 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-11 00:55:02.838321 | orchestrator | Thursday 11 September 2025 00:54:16 +0000 (0:00:00.577) 0:09:42.029 **** 2025-09-11 00:55:02.838326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-11 00:55:02.838335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-11 00:55:02.838340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-11 00:55:02.838345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-11 00:55:02.838350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-11 00:55:02.838355 | orchestrator | skipping: [testbed-node-3] 2025-09-11 
00:55:02.838359 | orchestrator | 2025-09-11 00:55:02.838364 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-11 00:55:02.838369 | orchestrator | Thursday 11 September 2025 00:54:17 +0000 (0:00:00.573) 0:09:42.602 **** 2025-09-11 00:55:02.838377 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-11 00:55:02.838382 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-11 00:55:02.838387 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-11 00:55:02.838391 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-11 00:55:02.838396 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-11 00:55:02.838401 | orchestrator | 2025-09-11 00:55:02.838406 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-11 00:55:02.838411 | orchestrator | Thursday 11 September 2025 00:54:48 +0000 (0:00:30.656) 0:10:13.259 **** 2025-09-11 00:55:02.838416 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.838420 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.838425 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.838430 | orchestrator | 2025-09-11 00:55:02.838435 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-11 00:55:02.838440 | orchestrator | 
Thursday 11 September 2025 00:54:48 +0000 (0:00:00.291) 0:10:13.550 **** 2025-09-11 00:55:02.838444 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.838449 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.838454 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.838459 | orchestrator | 2025-09-11 00:55:02.838463 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-11 00:55:02.838468 | orchestrator | Thursday 11 September 2025 00:54:48 +0000 (0:00:00.550) 0:10:14.100 **** 2025-09-11 00:55:02.838473 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:55:02.838478 | orchestrator | 2025-09-11 00:55:02.838483 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-11 00:55:02.838488 | orchestrator | Thursday 11 September 2025 00:54:49 +0000 (0:00:00.532) 0:10:14.633 **** 2025-09-11 00:55:02.838492 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:55:02.838497 | orchestrator | 2025-09-11 00:55:02.838502 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-11 00:55:02.838507 | orchestrator | Thursday 11 September 2025 00:54:50 +0000 (0:00:00.748) 0:10:15.381 **** 2025-09-11 00:55:02.838514 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:55:02.838523 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:55:02.838527 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:55:02.838532 | orchestrator | 2025-09-11 00:55:02.838537 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-11 00:55:02.838542 | orchestrator | Thursday 11 September 2025 00:54:51 +0000 (0:00:01.234) 0:10:16.616 **** 2025-09-11 00:55:02.838547 | orchestrator | changed: 
[testbed-node-3] 2025-09-11 00:55:02.838551 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:55:02.838556 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:55:02.838561 | orchestrator | 2025-09-11 00:55:02.838566 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-11 00:55:02.838570 | orchestrator | Thursday 11 September 2025 00:54:52 +0000 (0:00:01.317) 0:10:17.933 **** 2025-09-11 00:55:02.838575 | orchestrator | changed: [testbed-node-3] 2025-09-11 00:55:02.838580 | orchestrator | changed: [testbed-node-4] 2025-09-11 00:55:02.838585 | orchestrator | changed: [testbed-node-5] 2025-09-11 00:55:02.838590 | orchestrator | 2025-09-11 00:55:02.838594 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-11 00:55:02.838599 | orchestrator | Thursday 11 September 2025 00:54:54 +0000 (0:00:01.768) 0:10:19.702 **** 2025-09-11 00:55:02.838604 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.838609 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.838614 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-11 00:55:02.838618 | orchestrator | 2025-09-11 00:55:02.838623 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-11 00:55:02.838628 | orchestrator | Thursday 11 September 2025 00:54:57 +0000 (0:00:02.609) 0:10:22.311 **** 2025-09-11 00:55:02.838633 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.838638 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.838642 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.838647 | orchestrator 
| 2025-09-11 00:55:02.838652 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-11 00:55:02.838657 | orchestrator | Thursday 11 September 2025 00:54:57 +0000 (0:00:00.310) 0:10:22.621 **** 2025-09-11 00:55:02.838661 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:55:02.838666 | orchestrator | 2025-09-11 00:55:02.838671 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-11 00:55:02.838676 | orchestrator | Thursday 11 September 2025 00:54:58 +0000 (0:00:00.792) 0:10:23.414 **** 2025-09-11 00:55:02.838684 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:55:02.838688 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:55:02.838693 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:55:02.838697 | orchestrator | 2025-09-11 00:55:02.838702 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-11 00:55:02.838707 | orchestrator | Thursday 11 September 2025 00:54:58 +0000 (0:00:00.306) 0:10:23.720 **** 2025-09-11 00:55:02.838711 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:55:02.838716 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:55:02.838720 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:55:02.838725 | orchestrator | 2025-09-11 00:55:02.838729 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-11 00:55:02.838734 | orchestrator | Thursday 11 September 2025 00:54:58 +0000 (0:00:00.305) 0:10:24.026 **** 2025-09-11 00:55:02.838738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:55:02.838743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-11 00:55:02.838747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-11 00:55:02.838752 | orchestrator 
| skipping: [testbed-node-3]
2025-09-11 00:55:02.838760 | orchestrator |
2025-09-11 00:55:02.838765 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-11 00:55:02.838769 | orchestrator | Thursday 11 September 2025 00:54:59 +0000 (0:00:01.171) 0:10:25.198 ****
2025-09-11 00:55:02.838774 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:55:02.838778 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:55:02.838783 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:55:02.838787 | orchestrator |
2025-09-11 00:55:02.838792 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:55:02.838796 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2025-09-11 00:55:02.838801 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-09-11 00:55:02.838806 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-09-11 00:55:02.838811 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2025-09-11 00:55:02.838815 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-09-11 00:55:02.838820 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-09-11 00:55:02.838824 | orchestrator |
2025-09-11 00:55:02.838829 | orchestrator |
2025-09-11 00:55:02.838833 | orchestrator |
2025-09-11 00:55:02.838840 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:55:02.838845 | orchestrator | Thursday 11 September 2025 00:55:00 +0000 (0:00:00.319) 0:10:25.517 ****
2025-09-11 00:55:02.838849 | orchestrator | ===============================================================================
2025-09-11 00:55:02.838854 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 65.98s
2025-09-11 00:55:02.838859 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.34s
2025-09-11 00:55:02.838863 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.82s
2025-09-11 00:55:02.838868 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.66s
2025-09-11 00:55:02.838872 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.23s
2025-09-11 00:55:02.838877 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.59s
2025-09-11 00:55:02.838881 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.25s
2025-09-11 00:55:02.838886 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.69s
2025-09-11 00:55:02.838890 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.74s
2025-09-11 00:55:02.838895 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.10s
2025-09-11 00:55:02.838899 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.44s
2025-09-11 00:55:02.838904 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.00s
2025-09-11 00:55:02.838908 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.30s
2025-09-11 00:55:02.838913 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.16s
2025-09-11 00:55:02.838917 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.76s
2025-09-11 00:55:02.838922 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.66s
2025-09-11 00:55:02.838926 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.60s
2025-09-11 00:55:02.838931 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.45s
2025-09-11 00:55:02.838939 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.09s
2025-09-11 00:55:02.838944 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.06s
2025-09-11 00:55:02.838948 | orchestrator | 2025-09-11 00:55:02 | INFO  | Task da118b71-d11a-4b31-94de-5301016dc31d is in state SUCCESS
2025-09-11 00:55:02.838955 | orchestrator | 2025-09-11 00:55:02 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:55:02.838960 | orchestrator | 2025-09-11 00:55:02 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:55:02.838965 | orchestrator | 2025-09-11 00:55:02 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:55:02.838969 | orchestrator | 2025-09-11 00:55:02 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:55:05.864506 | orchestrator | 2025-09-11 00:55:05 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:55:05.866161 | orchestrator | 2025-09-11 00:55:05 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:55:05.868016 | orchestrator | 2025-09-11 00:55:05 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:55:05.868050 | orchestrator | 2025-09-11 00:55:05 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:55:08.896828 | orchestrator | 2025-09-11 00:55:08 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED
2025-09-11 00:55:08.898134 | orchestrator | 2025-09-11 00:55:08 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED
2025-09-11 00:55:08.899483 | orchestrator | 2025-09-11 00:55:08 |
INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:08.899764 | orchestrator | 2025-09-11 00:55:08 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:11.942278 | orchestrator | 2025-09-11 00:55:11 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:11.944195 | orchestrator | 2025-09-11 00:55:11 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:55:11.945653 | orchestrator | 2025-09-11 00:55:11 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:11.946095 | orchestrator | 2025-09-11 00:55:11 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:14.991607 | orchestrator | 2025-09-11 00:55:14 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:14.993613 | orchestrator | 2025-09-11 00:55:14 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:55:14.995177 | orchestrator | 2025-09-11 00:55:14 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:14.995205 | orchestrator | 2025-09-11 00:55:14 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:18.032618 | orchestrator | 2025-09-11 00:55:18 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:18.034002 | orchestrator | 2025-09-11 00:55:18 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:55:18.035393 | orchestrator | 2025-09-11 00:55:18 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:18.035421 | orchestrator | 2025-09-11 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:21.086938 | orchestrator | 2025-09-11 00:55:21 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:21.088294 | orchestrator | 2025-09-11 00:55:21 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in 
state STARTED 2025-09-11 00:55:21.090244 | orchestrator | 2025-09-11 00:55:21 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:21.090335 | orchestrator | 2025-09-11 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:24.129778 | orchestrator | 2025-09-11 00:55:24 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:24.131354 | orchestrator | 2025-09-11 00:55:24 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:55:24.132942 | orchestrator | 2025-09-11 00:55:24 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:24.132977 | orchestrator | 2025-09-11 00:55:24 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:27.171484 | orchestrator | 2025-09-11 00:55:27 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:27.172143 | orchestrator | 2025-09-11 00:55:27 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:55:27.173422 | orchestrator | 2025-09-11 00:55:27 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:27.173462 | orchestrator | 2025-09-11 00:55:27 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:30.224917 | orchestrator | 2025-09-11 00:55:30 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:30.227334 | orchestrator | 2025-09-11 00:55:30 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:55:30.228571 | orchestrator | 2025-09-11 00:55:30 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:30.228604 | orchestrator | 2025-09-11 00:55:30 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:33.283714 | orchestrator | 2025-09-11 00:55:33 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:33.284842 | orchestrator 
| 2025-09-11 00:55:33 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:55:33.287348 | orchestrator | 2025-09-11 00:55:33 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:33.287601 | orchestrator | 2025-09-11 00:55:33 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:36.328801 | orchestrator | 2025-09-11 00:55:36 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:36.330222 | orchestrator | 2025-09-11 00:55:36 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:55:36.331904 | orchestrator | 2025-09-11 00:55:36 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:36.331930 | orchestrator | 2025-09-11 00:55:36 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:39.378385 | orchestrator | 2025-09-11 00:55:39 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:39.378478 | orchestrator | 2025-09-11 00:55:39 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:55:39.379647 | orchestrator | 2025-09-11 00:55:39 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:39.379670 | orchestrator | 2025-09-11 00:55:39 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:42.421334 | orchestrator | 2025-09-11 00:55:42 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:42.422824 | orchestrator | 2025-09-11 00:55:42 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state STARTED 2025-09-11 00:55:42.424558 | orchestrator | 2025-09-11 00:55:42 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:42.424681 | orchestrator | 2025-09-11 00:55:42 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:45.470613 | orchestrator | 2025-09-11 00:55:45 | INFO  | Task 
6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:45.472348 | orchestrator | 2025-09-11 00:55:45 | INFO  | Task 4bfdb58f-c298-4587-9363-889fea0160af is in state SUCCESS 2025-09-11 00:55:45.474076 | orchestrator | 2025-09-11 00:55:45.474139 | orchestrator | 2025-09-11 00:55:45.474153 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 00:55:45.474164 | orchestrator | 2025-09-11 00:55:45.474175 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 00:55:45.474186 | orchestrator | Thursday 11 September 2025 00:53:00 +0000 (0:00:00.255) 0:00:00.255 **** 2025-09-11 00:55:45.474197 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:45.474210 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:55:45.474220 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:55:45.474231 | orchestrator | 2025-09-11 00:55:45.474242 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 00:55:45.474253 | orchestrator | Thursday 11 September 2025 00:53:00 +0000 (0:00:00.262) 0:00:00.518 **** 2025-09-11 00:55:45.474428 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-11 00:55:45.474447 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-11 00:55:45.474458 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-11 00:55:45.474469 | orchestrator | 2025-09-11 00:55:45.474480 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-11 00:55:45.474491 | orchestrator | 2025-09-11 00:55:45.474502 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-11 00:55:45.474513 | orchestrator | Thursday 11 September 2025 00:53:00 +0000 (0:00:00.335) 0:00:00.854 **** 2025-09-11 00:55:45.474524 | orchestrator | included: 
/ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:45.474535 | orchestrator | 2025-09-11 00:55:45.474546 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-11 00:55:45.474557 | orchestrator | Thursday 11 September 2025 00:53:01 +0000 (0:00:00.442) 0:00:01.296 **** 2025-09-11 00:55:45.474569 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-11 00:55:45.474580 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-11 00:55:45.474591 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-11 00:55:45.474603 | orchestrator | 2025-09-11 00:55:45.474614 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-11 00:55:45.474641 | orchestrator | Thursday 11 September 2025 00:53:01 +0000 (0:00:00.677) 0:00:01.973 **** 2025-09-11 00:55:45.474656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.474672 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.474723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.474740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.474760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.474774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.474794 | orchestrator | 2025-09-11 00:55:45.474806 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-11 00:55:45.474817 | orchestrator | Thursday 11 September 2025 00:53:03 +0000 (0:00:01.575) 0:00:03.549 **** 2025-09-11 00:55:45.474828 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:45.474839 | orchestrator | 2025-09-11 00:55:45.474851 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-11 00:55:45.474863 | orchestrator | Thursday 11 September 2025 00:53:03 +0000 (0:00:00.430) 0:00:03.980 **** 2025-09-11 00:55:45.474884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.474897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.474914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.474927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.474953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.474967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.474980 | orchestrator | 2025-09-11 00:55:45.474991 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-11 00:55:45.475003 | orchestrator | Thursday 11 September 2025 00:53:06 +0000 (0:00:02.792) 0:00:06.773 **** 2025-09-11 00:55:45.475020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-11 00:55:45.475039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-11 00:55:45.475052 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:45.475064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-11 00:55:45.475086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-11 00:55:45.475104 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:45.475141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-11 00:55:45.475196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-11 00:55:45.475211 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:45.475224 | orchestrator | 2025-09-11 00:55:45.475237 | 
orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-11 00:55:45.475250 | orchestrator | Thursday 11 September 2025 00:53:07 +0000 (0:00:00.808) 0:00:07.581 **** 2025-09-11 00:55:45.475263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-11 00:55:45.475285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-11 00:55:45.475299 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:45.475317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-11 00:55:45.475339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-11 00:55:45.475353 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:45.475366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-11 00:55:45.475388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-11 00:55:45.475403 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:45.475416 | orchestrator | 2025-09-11 00:55:45.475428 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-11 00:55:45.475439 | orchestrator | Thursday 11 September 2025 00:53:08 +0000 (0:00:00.884) 0:00:08.466 **** 2025-09-11 00:55:45.475456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.475475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.475487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.475506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.475519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.475549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.475562 | orchestrator | 2025-09-11 00:55:45.475574 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-11 00:55:45.475585 | orchestrator | Thursday 11 September 2025 00:53:10 +0000 (0:00:02.190) 0:00:10.657 **** 2025-09-11 00:55:45.475596 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:45.475607 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:45.475618 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:45.475629 | orchestrator | 2025-09-11 00:55:45.475640 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-11 00:55:45.475651 | orchestrator | Thursday 11 September 2025 00:53:13 +0000 (0:00:02.710) 0:00:13.368 **** 2025-09-11 00:55:45.475661 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:45.475672 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:45.475683 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:45.475693 | orchestrator | 2025-09-11 00:55:45.475704 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-11 00:55:45.475715 | orchestrator | Thursday 11 September 2025 00:53:15 +0000 (0:00:01.996) 0:00:15.364 **** 2025-09-11 00:55:45.475726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.475745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.475765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-11 00:55:45.475782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.475796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.475816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-11 00:55:45.475835 | orchestrator | 2025-09-11 00:55:45.475847 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-11 00:55:45.475858 | orchestrator | Thursday 11 September 2025 00:53:17 +0000 (0:00:01.831) 0:00:17.195 **** 2025-09-11 00:55:45.475870 | 
orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:45.475880 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:55:45.475891 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:55:45.475901 | orchestrator | 2025-09-11 00:55:45.475912 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-11 00:55:45.475923 | orchestrator | Thursday 11 September 2025 00:53:17 +0000 (0:00:00.252) 0:00:17.448 **** 2025-09-11 00:55:45.475933 | orchestrator | 2025-09-11 00:55:45.475944 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-11 00:55:45.475955 | orchestrator | Thursday 11 September 2025 00:53:17 +0000 (0:00:00.056) 0:00:17.505 **** 2025-09-11 00:55:45.475965 | orchestrator | 2025-09-11 00:55:45.475976 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-11 00:55:45.475986 | orchestrator | Thursday 11 September 2025 00:53:17 +0000 (0:00:00.060) 0:00:17.565 **** 2025-09-11 00:55:45.475997 | orchestrator | 2025-09-11 00:55:45.476007 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-11 00:55:45.476018 | orchestrator | Thursday 11 September 2025 00:53:17 +0000 (0:00:00.064) 0:00:17.630 **** 2025-09-11 00:55:45.476028 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:45.476039 | orchestrator | 2025-09-11 00:55:45.476054 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-11 00:55:45.476065 | orchestrator | Thursday 11 September 2025 00:53:17 +0000 (0:00:00.200) 0:00:17.831 **** 2025-09-11 00:55:45.476076 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:55:45.476086 | orchestrator | 2025-09-11 00:55:45.476097 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-11 00:55:45.476108 | orchestrator | Thursday 11 September 2025 
00:53:18 +0000 (0:00:00.494) 0:00:18.325 **** 2025-09-11 00:55:45.476149 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:45.476160 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:45.476171 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:45.476182 | orchestrator | 2025-09-11 00:55:45.476193 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-11 00:55:45.476203 | orchestrator | Thursday 11 September 2025 00:54:15 +0000 (0:00:57.286) 0:01:15.612 **** 2025-09-11 00:55:45.476214 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:45.476225 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:55:45.476235 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:55:45.476246 | orchestrator | 2025-09-11 00:55:45.476256 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-11 00:55:45.476267 | orchestrator | Thursday 11 September 2025 00:55:33 +0000 (0:01:18.116) 0:02:33.728 **** 2025-09-11 00:55:45.476278 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:55:45.476288 | orchestrator | 2025-09-11 00:55:45.476299 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-11 00:55:45.476310 | orchestrator | Thursday 11 September 2025 00:55:34 +0000 (0:00:00.497) 0:02:34.226 **** 2025-09-11 00:55:45.476320 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:45.476331 | orchestrator | 2025-09-11 00:55:45.476342 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-11 00:55:45.476352 | orchestrator | Thursday 11 September 2025 00:55:36 +0000 (0:00:02.733) 0:02:36.959 **** 2025-09-11 00:55:45.476363 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:55:45.476373 | orchestrator | 2025-09-11 00:55:45.476384 | orchestrator | TASK [opensearch : 
Create new log retention policy] **************************** 2025-09-11 00:55:45.476394 | orchestrator | Thursday 11 September 2025 00:55:39 +0000 (0:00:02.254) 0:02:39.213 **** 2025-09-11 00:55:45.476405 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:45.476424 | orchestrator | 2025-09-11 00:55:45.476435 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-11 00:55:45.476445 | orchestrator | Thursday 11 September 2025 00:55:41 +0000 (0:00:02.740) 0:02:41.954 **** 2025-09-11 00:55:45.476456 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:55:45.476466 | orchestrator | 2025-09-11 00:55:45.476477 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:55:45.476489 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-11 00:55:45.476501 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-11 00:55:45.476511 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-11 00:55:45.476522 | orchestrator | 2025-09-11 00:55:45.476532 | orchestrator | 2025-09-11 00:55:45.476543 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:55:45.476560 | orchestrator | Thursday 11 September 2025 00:55:44 +0000 (0:00:02.573) 0:02:44.528 **** 2025-09-11 00:55:45.476571 | orchestrator | =============================================================================== 2025-09-11 00:55:45.476582 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 78.12s 2025-09-11 00:55:45.476592 | orchestrator | opensearch : Restart opensearch container ------------------------------ 57.29s 2025-09-11 00:55:45.476603 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.79s 
2025-09-11 00:55:45.476614 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.74s 2025-09-11 00:55:45.476624 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.73s 2025-09-11 00:55:45.476635 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.71s 2025-09-11 00:55:45.476645 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s 2025-09-11 00:55:45.476656 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.25s 2025-09-11 00:55:45.476666 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.19s 2025-09-11 00:55:45.476677 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.00s 2025-09-11 00:55:45.476687 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.83s 2025-09-11 00:55:45.476698 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.58s 2025-09-11 00:55:45.476709 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.88s 2025-09-11 00:55:45.476719 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.81s 2025-09-11 00:55:45.476730 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s 2025-09-11 00:55:45.476740 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2025-09-11 00:55:45.476751 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.49s 2025-09-11 00:55:45.476761 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2025-09-11 00:55:45.476777 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.43s 
2025-09-11 00:55:45.476788 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-09-11 00:55:45.476799 | orchestrator | 2025-09-11 00:55:45 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:45.476810 | orchestrator | 2025-09-11 00:55:45 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:48.519987 | orchestrator | 2025-09-11 00:55:48 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:48.522339 | orchestrator | 2025-09-11 00:55:48 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:48.522790 | orchestrator | 2025-09-11 00:55:48 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:51.561828 | orchestrator | 2025-09-11 00:55:51 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:51.563233 | orchestrator | 2025-09-11 00:55:51 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:51.563266 | orchestrator | 2025-09-11 00:55:51 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:54.606682 | orchestrator | 2025-09-11 00:55:54 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:54.607023 | orchestrator | 2025-09-11 00:55:54 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:54.607173 | orchestrator | 2025-09-11 00:55:54 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:55:57.647403 | orchestrator | 2025-09-11 00:55:57 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:55:57.648670 | orchestrator | 2025-09-11 00:55:57 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:55:57.648834 | orchestrator | 2025-09-11 00:55:57 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:56:00.693692 | orchestrator | 2025-09-11 00:56:00 | INFO  | Task 
6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state STARTED 2025-09-11 00:56:00.695845 | orchestrator | 2025-09-11 00:56:00 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED 2025-09-11 00:56:00.695911 | orchestrator | 2025-09-11 00:56:00 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:56:03.743588 | orchestrator | 2025-09-11 00:56:03 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED 2025-09-11 00:56:03.745446 | orchestrator | 2025-09-11 00:56:03 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED 2025-09-11 00:56:03.753208 | orchestrator | 2025-09-11 00:56:03 | INFO  | Task 6f5ed154-bc65-45be-abbc-4320dc598ca0 is in state SUCCESS 2025-09-11 00:56:03.755452 | orchestrator | 2025-09-11 00:56:03.755493 | orchestrator | 2025-09-11 00:56:03.755505 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-11 00:56:03.755517 | orchestrator | 2025-09-11 00:56:03.755528 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-11 00:56:03.755540 | orchestrator | Thursday 11 September 2025 00:53:00 +0000 (0:00:00.082) 0:00:00.082 **** 2025-09-11 00:56:03.755552 | orchestrator | ok: [localhost] => { 2025-09-11 00:56:03.755564 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-11 00:56:03.755576 | orchestrator | } 2025-09-11 00:56:03.755588 | orchestrator | 2025-09-11 00:56:03.755599 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-11 00:56:03.755610 | orchestrator | Thursday 11 September 2025 00:53:00 +0000 (0:00:00.036) 0:00:00.118 **** 2025-09-11 00:56:03.755621 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-11 00:56:03.755633 | orchestrator | ...ignoring 2025-09-11 00:56:03.755645 | orchestrator | 2025-09-11 00:56:03.755657 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-11 00:56:03.755667 | orchestrator | Thursday 11 September 2025 00:53:02 +0000 (0:00:02.722) 0:00:02.841 **** 2025-09-11 00:56:03.755678 | orchestrator | skipping: [localhost] 2025-09-11 00:56:03.755688 | orchestrator | 2025-09-11 00:56:03.755699 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-11 00:56:03.755709 | orchestrator | Thursday 11 September 2025 00:53:02 +0000 (0:00:00.055) 0:00:02.896 **** 2025-09-11 00:56:03.755748 | orchestrator | ok: [localhost] 2025-09-11 00:56:03.755759 | orchestrator | 2025-09-11 00:56:03.755769 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 00:56:03.755780 | orchestrator | 2025-09-11 00:56:03.755791 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 00:56:03.755801 | orchestrator | Thursday 11 September 2025 00:53:02 +0000 (0:00:00.131) 0:00:03.028 **** 2025-09-11 00:56:03.755812 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:56:03.755823 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:56:03.755834 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:56:03.755844 | orchestrator | 2025-09-11 00:56:03.755855 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 00:56:03.755866 | orchestrator | Thursday 11 September 2025 00:53:03 +0000 (0:00:00.269) 0:00:03.297 **** 2025-09-11 00:56:03.755890 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-11 00:56:03.755901 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-09-11 00:56:03.755913 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-11 00:56:03.755923 | orchestrator | 2025-09-11 00:56:03.755934 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-11 00:56:03.755944 | orchestrator | 2025-09-11 00:56:03.755955 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-11 00:56:03.755966 | orchestrator | Thursday 11 September 2025 00:53:03 +0000 (0:00:00.455) 0:00:03.753 **** 2025-09-11 00:56:03.755976 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-11 00:56:03.755987 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-11 00:56:03.755998 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-11 00:56:03.756008 | orchestrator | 2025-09-11 00:56:03.756019 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-11 00:56:03.756030 | orchestrator | Thursday 11 September 2025 00:53:04 +0000 (0:00:00.342) 0:00:04.096 **** 2025-09-11 00:56:03.756040 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:56:03.756052 | orchestrator | 2025-09-11 00:56:03.756065 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-11 00:56:03.756078 | orchestrator | Thursday 11 September 2025 00:53:04 +0000 (0:00:00.601) 0:00:04.697 **** 2025-09-11 00:56:03.756114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-11 00:56:03.756171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-11 00:56:03.756185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-11 00:56:03.756198 | orchestrator | 2025-09-11 00:56:03.756216 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-11 00:56:03.756228 | orchestrator | Thursday 11 September 2025 00:53:07 +0000 (0:00:02.868) 0:00:07.566 **** 2025-09-11 00:56:03.756252 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:56:03.756264 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:56:03.756275 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:56:03.756286 | orchestrator | 2025-09-11 00:56:03.756296 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-11 00:56:03.756307 | orchestrator | Thursday 11 September 2025 00:53:08 +0000 (0:00:00.631) 0:00:08.197 **** 2025-09-11 00:56:03.756318 | orchestrator | skipping: [testbed-node-1] 2025-09-11 
00:56:03.756329 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:56:03.756339 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:56:03.756350 | orchestrator | 2025-09-11 00:56:03.756361 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-11 00:56:03.756372 | orchestrator | Thursday 11 September 2025 00:53:09 +0000 (0:00:01.295) 0:00:09.492 **** 2025-09-11 00:56:03.756389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-11 00:56:03.756408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-11 00:56:03.756437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-11 
00:56:03.756450 | orchestrator | 2025-09-11 00:56:03.756461 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-11 00:56:03.756472 | orchestrator | Thursday 11 September 2025 00:53:12 +0000 (0:00:03.342) 0:00:12.835 **** 2025-09-11 00:56:03.756483 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:56:03.756494 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:56:03.756504 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:56:03.756515 | orchestrator | 2025-09-11 00:56:03.756526 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-11 00:56:03.756536 | orchestrator | Thursday 11 September 2025 00:53:13 +0000 (0:00:01.090) 0:00:13.925 **** 2025-09-11 00:56:03.756547 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:56:03.756558 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:56:03.756569 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:56:03.756579 | orchestrator | 2025-09-11 00:56:03.756590 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-11 00:56:03.756601 | orchestrator | Thursday 11 September 2025 00:53:17 +0000 (0:00:03.954) 0:00:17.880 **** 2025-09-11 00:56:03.756611 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:56:03.756622 | orchestrator | 2025-09-11 00:56:03.756633 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-11 00:56:03.756643 | orchestrator | Thursday 11 September 2025 00:53:18 +0000 (0:00:00.442) 0:00:18.322 **** 2025-09-11 00:56:03.756663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:56:03.756683 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:56:03.756700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:56:03.756712 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:56:03.756731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:56:03.756750 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:56:03.756761 | orchestrator | 2025-09-11 00:56:03.756772 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-11 00:56:03.756783 | orchestrator | Thursday 11 September 2025 00:53:20 +0000 (0:00:02.341) 0:00:20.663 **** 2025-09-11 00:56:03.756799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:56:03.756811 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:56:03.756829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:56:03.756849 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:56:03.756865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:56:03.756878 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:56:03.756888 | orchestrator | 2025-09-11 00:56:03.756899 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-11 00:56:03.756910 | orchestrator | Thursday 11 September 2025 00:53:22 +0000 (0:00:01.922) 0:00:22.586 **** 2025-09-11 00:56:03.756921 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:56:03.756945 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:56:03.756970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:56:03.756982 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:56:03.756994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-11 00:56:03.757012 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:56:03.757023 | orchestrator | 2025-09-11 00:56:03.757034 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-11 00:56:03.757045 | orchestrator | Thursday 11 September 2025 00:53:25 +0000 
(0:00:02.630) 0:00:25.216 **** 2025-09-11 00:56:03.757065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-11 00:56:03.757083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-11 00:56:03.757111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-11 00:56:03.757160 | orchestrator | 2025-09-11 00:56:03.757172 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-11 00:56:03.757183 | orchestrator | Thursday 11 September 2025 00:53:27 +0000 (0:00:02.727) 0:00:27.943 **** 2025-09-11 00:56:03.757194 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:56:03.757204 | orchestrator | 
changed: [testbed-node-1]
2025-09-11 00:56:03.757215 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:56:03.757226 | orchestrator |
2025-09-11 00:56:03.757237 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-09-11 00:56:03.757247 | orchestrator | Thursday 11 September 2025 00:53:28 +0000 (0:00:00.863) 0:00:28.807 ****
2025-09-11 00:56:03.757258 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:56:03.757269 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:56:03.757280 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:56:03.757291 | orchestrator |
2025-09-11 00:56:03.757301 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-09-11 00:56:03.757318 | orchestrator | Thursday 11 September 2025 00:53:29 +0000 (0:00:00.788) 0:00:29.595 ****
2025-09-11 00:56:03.757330 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:56:03.757340 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:56:03.757359 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:56:03.757370 | orchestrator |
2025-09-11 00:56:03.757380 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-09-11 00:56:03.757391 | orchestrator | Thursday 11 September 2025 00:53:30 +0000 (0:00:00.472) 0:00:30.067 ****
2025-09-11 00:56:03.757404 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-09-11 00:56:03.757415 | orchestrator | ...ignoring
2025-09-11 00:56:03.757427 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-09-11 00:56:03.757438 | orchestrator | ...ignoring
2025-09-11 00:56:03.757449 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-09-11 00:56:03.757460 | orchestrator | ...ignoring
2025-09-11 00:56:03.757471 | orchestrator |
2025-09-11 00:56:03.757482 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-09-11 00:56:03.757493 | orchestrator | Thursday 11 September 2025 00:53:40 +0000 (0:00:10.955) 0:00:41.023 ****
2025-09-11 00:56:03.757504 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:56:03.757515 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:56:03.757526 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:56:03.757536 | orchestrator |
2025-09-11 00:56:03.757547 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-09-11 00:56:03.757558 | orchestrator | Thursday 11 September 2025 00:53:41 +0000 (0:00:00.387) 0:00:41.411 ****
2025-09-11 00:56:03.757569 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:56:03.757580 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:56:03.757591 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:56:03.757601 | orchestrator |
2025-09-11 00:56:03.757612 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-09-11 00:56:03.757623 | orchestrator | Thursday 11 September 2025 00:53:41 +0000 (0:00:00.629) 0:00:42.040 ****
2025-09-11 00:56:03.757634 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:56:03.757645 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:56:03.757655 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:56:03.757666 | orchestrator |
2025-09-11 00:56:03.757677 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-09-11 00:56:03.757688 | orchestrator | Thursday 11 September 2025 00:53:42 +0000 (0:00:00.396) 0:00:42.436 ****
2025-09-11 00:56:03.757698 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:56:03.757709 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:56:03.757720 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:56:03.757731 | orchestrator |
2025-09-11 00:56:03.757741 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-09-11 00:56:03.757752 | orchestrator | Thursday 11 September 2025 00:53:42 +0000 (0:00:00.395) 0:00:42.832 ****
2025-09-11 00:56:03.757763 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:56:03.757774 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:56:03.757785 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:56:03.757796 | orchestrator |
2025-09-11 00:56:03.757806 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-09-11 00:56:03.757817 | orchestrator | Thursday 11 September 2025 00:53:43 +0000 (0:00:00.435) 0:00:43.268 ****
2025-09-11 00:56:03.757834 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:56:03.757846 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:56:03.757856 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:56:03.757867 | orchestrator |
2025-09-11 00:56:03.757878 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-11 00:56:03.757889 | orchestrator | Thursday 11 September 2025 00:53:44 +0000 (0:00:00.824) 0:00:44.092 ****
2025-09-11 00:56:03.757900 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:56:03.757910 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:56:03.757928 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-09-11 00:56:03.757939 | orchestrator |
2025-09-11 00:56:03.757950 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-09-11 00:56:03.757960 | orchestrator | Thursday 11 September 2025 00:53:44 +0000 (0:00:00.365) 0:00:44.458 ****
2025-09-11 00:56:03.757971 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:56:03.757982 | orchestrator |
2025-09-11 00:56:03.757993 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-09-11 00:56:03.758003 | orchestrator | Thursday 11 September 2025 00:53:54 +0000 (0:00:09.887) 0:00:54.346 ****
2025-09-11 00:56:03.758014 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:56:03.758073 | orchestrator |
2025-09-11 00:56:03.758084 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-11 00:56:03.758095 | orchestrator | Thursday 11 September 2025 00:53:54 +0000 (0:00:00.142) 0:00:54.488 ****
2025-09-11 00:56:03.758106 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:56:03.758170 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:56:03.758183 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:56:03.758194 | orchestrator |
2025-09-11 00:56:03.758204 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-09-11 00:56:03.758215 | orchestrator | Thursday 11 September 2025 00:53:55 +0000 (0:00:00.924) 0:00:55.413 ****
2025-09-11 00:56:03.758226 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:56:03.758237 | orchestrator |
2025-09-11 00:56:03.758248 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-09-11 00:56:03.758258 | orchestrator | Thursday 11 September 2025 00:54:02 +0000 (0:00:07.457) 0:01:02.870 ****
2025-09-11 00:56:03.758269 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:56:03.758280 | orchestrator |
2025-09-11 00:56:03.758291 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-09-11 00:56:03.758301 | orchestrator | Thursday 11 September 2025 00:54:04 +0000 (0:00:01.563) 0:01:04.434 ****
2025-09-11 00:56:03.758312 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:56:03.758323 | orchestrator |
2025-09-11 00:56:03.758340 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-09-11 00:56:03.758351 | orchestrator | Thursday 11 September 2025 00:54:06 +0000 (0:00:02.593) 0:01:07.027 ****
2025-09-11 00:56:03.758362 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:56:03.758373 | orchestrator |
2025-09-11 00:56:03.758384 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-09-11 00:56:03.758394 | orchestrator | Thursday 11 September 2025 00:54:07 +0000 (0:00:00.132) 0:01:07.160 ****
2025-09-11 00:56:03.758405 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:56:03.758416 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:56:03.758427 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:56:03.758437 | orchestrator |
2025-09-11 00:56:03.758448 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-09-11 00:56:03.758459 | orchestrator | Thursday 11 September 2025 00:54:07 +0000 (0:00:00.316) 0:01:07.477 ****
2025-09-11 00:56:03.758469 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:56:03.758480 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-09-11 00:56:03.758491 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:56:03.758502 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:56:03.758512 | orchestrator |
2025-09-11 00:56:03.758523 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-09-11 00:56:03.758534 | orchestrator | skipping: no hosts matched
2025-09-11 00:56:03.758545 | orchestrator |
2025-09-11 00:56:03.758555 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-09-11 00:56:03.758566 | orchestrator |
2025-09-11 00:56:03.758577 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-09-11 00:56:03.758588 | orchestrator | Thursday 11 September 2025 00:54:07 +0000 (0:00:00.577) 0:01:08.054 ****
2025-09-11 00:56:03.758599 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:56:03.758615 | orchestrator |
2025-09-11 00:56:03.758626 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-09-11 00:56:03.758637 | orchestrator | Thursday 11 September 2025 00:54:26 +0000 (0:00:18.446) 0:01:26.501 ****
2025-09-11 00:56:03.758648 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:56:03.758658 | orchestrator |
2025-09-11 00:56:03.758669 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-09-11 00:56:03.758680 | orchestrator | Thursday 11 September 2025 00:54:47 +0000 (0:00:20.602) 0:01:47.103 ****
2025-09-11 00:56:03.758690 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:56:03.758700 | orchestrator |
2025-09-11 00:56:03.758709 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-09-11 00:56:03.758719 | orchestrator |
2025-09-11 00:56:03.758729 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-09-11 00:56:03.758738 | orchestrator | Thursday 11 September 2025 00:54:49 +0000 (0:00:02.261) 0:01:49.364 ****
2025-09-11 00:56:03.758748 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:56:03.758757 | orchestrator |
2025-09-11 00:56:03.758767 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-09-11 00:56:03.758776 | orchestrator | Thursday 11 September 2025 00:55:12 +0000 (0:00:23.343) 0:02:12.707 ****
2025-09-11 00:56:03.758786 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:56:03.758796 | orchestrator |
2025-09-11 00:56:03.758805 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-09-11 00:56:03.758815
| orchestrator | Thursday 11 September 2025 00:55:28 +0000 (0:00:15.581) 0:02:28.289 **** 2025-09-11 00:56:03.758825 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:56:03.758834 | orchestrator | 2025-09-11 00:56:03.758844 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-11 00:56:03.758853 | orchestrator | 2025-09-11 00:56:03.758870 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-11 00:56:03.758880 | orchestrator | Thursday 11 September 2025 00:55:30 +0000 (0:00:02.350) 0:02:30.639 **** 2025-09-11 00:56:03.758889 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:56:03.758899 | orchestrator | 2025-09-11 00:56:03.758908 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-11 00:56:03.758918 | orchestrator | Thursday 11 September 2025 00:55:41 +0000 (0:00:11.302) 0:02:41.941 **** 2025-09-11 00:56:03.758928 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:56:03.758937 | orchestrator | 2025-09-11 00:56:03.758947 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-11 00:56:03.758956 | orchestrator | Thursday 11 September 2025 00:55:46 +0000 (0:00:04.564) 0:02:46.506 **** 2025-09-11 00:56:03.758966 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:56:03.758976 | orchestrator | 2025-09-11 00:56:03.758985 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-11 00:56:03.758995 | orchestrator | 2025-09-11 00:56:03.759004 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-11 00:56:03.759014 | orchestrator | Thursday 11 September 2025 00:55:49 +0000 (0:00:02.615) 0:02:49.121 **** 2025-09-11 00:56:03.759023 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:56:03.759033 | orchestrator | 
2025-09-11 00:56:03.759042 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-11 00:56:03.759052 | orchestrator | Thursday 11 September 2025 00:55:49 +0000 (0:00:00.510) 0:02:49.631 **** 2025-09-11 00:56:03.759061 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:56:03.759071 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:56:03.759081 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:56:03.759090 | orchestrator | 2025-09-11 00:56:03.759100 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-11 00:56:03.759109 | orchestrator | Thursday 11 September 2025 00:55:52 +0000 (0:00:02.514) 0:02:52.145 **** 2025-09-11 00:56:03.759139 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:56:03.759155 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:56:03.759165 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:56:03.759174 | orchestrator | 2025-09-11 00:56:03.759184 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-11 00:56:03.759194 | orchestrator | Thursday 11 September 2025 00:55:54 +0000 (0:00:02.313) 0:02:54.459 **** 2025-09-11 00:56:03.759203 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:56:03.759213 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:56:03.759227 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:56:03.759236 | orchestrator | 2025-09-11 00:56:03.759246 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-11 00:56:03.759256 | orchestrator | Thursday 11 September 2025 00:55:56 +0000 (0:00:02.121) 0:02:56.580 **** 2025-09-11 00:56:03.759265 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:56:03.759275 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:56:03.759284 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:56:03.759294 | orchestrator | 
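The "Wait for MariaDB service to sync WSREP" handlers above repeatedly query Galera state on the node until it reports itself as synced with the cluster. A minimal sketch of such a readiness check, assuming the tab-separated output of `mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"` (the helper name is hypothetical, not the kolla-ansible implementation):

```python
def wsrep_synced(status_output: str) -> bool:
    """Return True when Galera status output reports a synced node.

    Expects lines of the form "wsrep_local_state_comment<TAB>Synced",
    as printed by the mysql client for SHOW GLOBAL STATUS queries.
    """
    for line in status_output.splitlines():
        parts = line.split("\t")
        if len(parts) == 2 and parts[0] == "wsrep_local_state_comment":
            # Other values ("Donor/Desynced", "Joining", ...) mean not ready.
            return parts[1].strip() == "Synced"
    return False
```

A wait loop would simply re-run the query until this predicate returns true or a retry budget is exhausted.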
2025-09-11 00:56:03.759303 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-09-11 00:56:03.759313 | orchestrator | Thursday 11 September 2025 00:55:58 +0000 (0:00:02.103) 0:02:58.684 ****
2025-09-11 00:56:03.759322 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:56:03.759332 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:56:03.759342 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:56:03.759351 | orchestrator |
2025-09-11 00:56:03.759361 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-09-11 00:56:03.759370 | orchestrator | Thursday 11 September 2025 00:56:01 +0000 (0:00:02.815) 0:03:01.499 ****
2025-09-11 00:56:03.759380 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:56:03.759389 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:56:03.759399 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:56:03.759408 | orchestrator |
2025-09-11 00:56:03.759418 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:56:03.759427 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-09-11 00:56:03.759438 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-09-11 00:56:03.759449 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-09-11 00:56:03.759459 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-09-11 00:56:03.759468 | orchestrator |
2025-09-11 00:56:03.759478 | orchestrator |
2025-09-11 00:56:03.759488 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:56:03.759497 | orchestrator | Thursday 11 September 2025 00:56:01 +0000 (0:00:00.392) 0:03:01.892 ****
2025-09-11 00:56:03.759507 | orchestrator | ===============================================================================
2025-09-11 00:56:03.759516 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.79s
2025-09-11 00:56:03.759526 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.18s
2025-09-11 00:56:03.759535 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.30s
2025-09-11 00:56:03.759545 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.96s
2025-09-11 00:56:03.759554 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.89s
2025-09-11 00:56:03.759564 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.46s
2025-09-11 00:56:03.759578 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.61s
2025-09-11 00:56:03.759588 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.56s
2025-09-11 00:56:03.759604 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.95s
2025-09-11 00:56:03.759613 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.34s
2025-09-11 00:56:03.759623 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.87s
2025-09-11 00:56:03.759632 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.82s
2025-09-11 00:56:03.759642 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.73s
2025-09-11 00:56:03.759652 | orchestrator | Check MariaDB service --------------------------------------------------- 2.72s
2025-09-11 00:56:03.759661 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.63s
2025-09-11 00:56:03.759671 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.62s
2025-09-11 00:56:03.759680 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.59s
2025-09-11 00:56:03.759690 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.51s
2025-09-11 00:56:03.759700 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.34s
2025-09-11 00:56:03.759709 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.31s
2025-09-11 00:56:03.759719 | orchestrator | 2025-09-11 00:56:03 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:03.759729 | orchestrator | 2025-09-11 00:56:03 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:06.801969 | orchestrator | 2025-09-11 00:56:06 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:06.802652 | orchestrator | 2025-09-11 00:56:06 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:06.803470 | orchestrator | 2025-09-11 00:56:06 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:06.803735 | orchestrator | 2025-09-11 00:56:06 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:09.846423 | orchestrator | 2025-09-11 00:56:09 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:09.846521 | orchestrator | 2025-09-11 00:56:09 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:09.847282 | orchestrator | 2025-09-11 00:56:09 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:09.847308 | orchestrator | 2025-09-11 00:56:09 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:12.880906 | orchestrator | 2025-09-11 00:56:12 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
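The repeated INFO lines come from a task watcher that re-checks the state of each submitted task on a fixed interval until everything leaves the STARTED state. A simplified sketch of that poll loop (the `get_state` callable and function name are assumptions for illustration, not the actual osism code):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=1000):
    """Poll until every task reports SUCCESS; return the final states.

    get_state(task_id) -> str is assumed to return a state string
    such as "STARTED" or "SUCCESS".
    """
    for _ in range(max_checks):
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(state == "SUCCESS" for state in states.values()):
            return states
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("tasks did not finish within the check budget")
```

In the log above three tasks are watched concurrently, which is why each check prints one state line per task ID before the wait message.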
2025-09-11 00:56:12.881333 | orchestrator | 2025-09-11 00:56:12 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:12.882755 | orchestrator | 2025-09-11 00:56:12 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:12.882778 | orchestrator | 2025-09-11 00:56:12 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:15.911968 | orchestrator | 2025-09-11 00:56:15 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:15.913898 | orchestrator | 2025-09-11 00:56:15 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:15.915757 | orchestrator | 2025-09-11 00:56:15 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:15.915782 | orchestrator | 2025-09-11 00:56:15 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:18.949343 | orchestrator | 2025-09-11 00:56:18 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:18.949932 | orchestrator | 2025-09-11 00:56:18 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:18.951466 | orchestrator | 2025-09-11 00:56:18 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:18.951810 | orchestrator | 2025-09-11 00:56:18 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:21.978360 | orchestrator | 2025-09-11 00:56:21 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:21.979283 | orchestrator | 2025-09-11 00:56:21 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:21.982807 | orchestrator | 2025-09-11 00:56:21 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:21.982846 | orchestrator | 2025-09-11 00:56:21 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:25.017973 | orchestrator | 2025-09-11 00:56:25 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:25.020747 | orchestrator | 2025-09-11 00:56:25 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:25.023113 | orchestrator | 2025-09-11 00:56:25 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:25.023663 | orchestrator | 2025-09-11 00:56:25 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:28.055728 | orchestrator | 2025-09-11 00:56:28 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:28.056333 | orchestrator | 2025-09-11 00:56:28 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:28.057657 | orchestrator | 2025-09-11 00:56:28 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:28.057682 | orchestrator | 2025-09-11 00:56:28 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:31.091334 | orchestrator | 2025-09-11 00:56:31 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:31.094239 | orchestrator | 2025-09-11 00:56:31 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:31.098009 | orchestrator | 2025-09-11 00:56:31 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:31.098732 | orchestrator | 2025-09-11 00:56:31 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:34.134537 | orchestrator | 2025-09-11 00:56:34 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:34.138574 | orchestrator | 2025-09-11 00:56:34 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:34.140070 | orchestrator | 2025-09-11 00:56:34 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:34.140239 | orchestrator | 2025-09-11 00:56:34 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:37.180044 | orchestrator | 2025-09-11 00:56:37 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:37.180461 | orchestrator | 2025-09-11 00:56:37 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:37.181672 | orchestrator | 2025-09-11 00:56:37 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:37.181704 | orchestrator | 2025-09-11 00:56:37 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:40.226083 | orchestrator | 2025-09-11 00:56:40 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:40.228104 | orchestrator | 2025-09-11 00:56:40 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:40.229838 | orchestrator | 2025-09-11 00:56:40 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:40.229864 | orchestrator | 2025-09-11 00:56:40 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:43.272886 | orchestrator | 2025-09-11 00:56:43 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:43.273695 | orchestrator | 2025-09-11 00:56:43 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:43.275544 | orchestrator | 2025-09-11 00:56:43 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:43.275572 | orchestrator | 2025-09-11 00:56:43 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:46.318221 | orchestrator | 2025-09-11 00:56:46 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:46.319044 | orchestrator | 2025-09-11 00:56:46 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:46.320507 | orchestrator | 2025-09-11 00:56:46 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:46.320532 | orchestrator | 2025-09-11 00:56:46 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:49.362942 | orchestrator | 2025-09-11 00:56:49 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:49.365842 | orchestrator | 2025-09-11 00:56:49 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:49.368116 | orchestrator | 2025-09-11 00:56:49 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:49.368173 | orchestrator | 2025-09-11 00:56:49 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:52.412671 | orchestrator | 2025-09-11 00:56:52 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:52.413646 | orchestrator | 2025-09-11 00:56:52 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:52.415517 | orchestrator | 2025-09-11 00:56:52 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:52.415545 | orchestrator | 2025-09-11 00:56:52 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:55.460817 | orchestrator | 2025-09-11 00:56:55 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:55.463394 | orchestrator | 2025-09-11 00:56:55 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:55.465547 | orchestrator | 2025-09-11 00:56:55 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:55.465745 | orchestrator | 2025-09-11 00:56:55 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:56:58.501532 | orchestrator | 2025-09-11 00:56:58 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:56:58.504178 | orchestrator | 2025-09-11 00:56:58 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:56:58.505421 | orchestrator | 2025-09-11 00:56:58 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:56:58.505591 | orchestrator | 2025-09-11 00:56:58 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:57:01.544865 | orchestrator | 2025-09-11 00:57:01 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:57:01.546599 | orchestrator | 2025-09-11 00:57:01 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:57:01.548750 | orchestrator | 2025-09-11 00:57:01 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:57:01.549053 | orchestrator | 2025-09-11 00:57:01 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:57:04.584548 | orchestrator | 2025-09-11 00:57:04 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:57:04.586070 | orchestrator | 2025-09-11 00:57:04 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:57:04.587352 | orchestrator | 2025-09-11 00:57:04 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:57:04.587636 | orchestrator | 2025-09-11 00:57:04 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:57:07.619374 | orchestrator | 2025-09-11 00:57:07 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:57:07.621393 | orchestrator | 2025-09-11 00:57:07 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:57:07.623096 | orchestrator | 2025-09-11 00:57:07 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:57:07.623183 | orchestrator | 2025-09-11 00:57:07 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:57:10.661439 | orchestrator | 2025-09-11 00:57:10 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:57:10.663249 | orchestrator | 2025-09-11 00:57:10 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:57:10.664769 | orchestrator | 2025-09-11 00:57:10 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state STARTED
2025-09-11 00:57:10.664907 | orchestrator | 2025-09-11 00:57:10 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:57:13.710837 | orchestrator | 2025-09-11 00:57:13 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:57:13.711623 | orchestrator | 2025-09-11 00:57:13 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED
2025-09-11 00:57:13.713079 | orchestrator | 2025-09-11 00:57:13 | INFO  | Task 80d43de5-13f7-4a4c-ab5a-4beceecff3e9 is in state STARTED
2025-09-11 00:57:13.716239 | orchestrator | 2025-09-11 00:57:13 | INFO  | Task 359aacdb-e76a-48e3-9bb8-cbd28816ce8f is in state SUCCESS
2025-09-11 00:57:13.718288 | orchestrator |
2025-09-11 00:57:13.718392 | orchestrator |
2025-09-11 00:57:13.718405 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-09-11 00:57:13.718416 | orchestrator |
2025-09-11 00:57:13.718425 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-11 00:57:13.718436 | orchestrator | Thursday 11 September 2025 00:55:04 +0000 (0:00:00.602) 0:00:00.602 ****
2025-09-11 00:57:13.718509 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 00:57:13.718521 | orchestrator |
2025-09-11 00:57:13.718529 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-11 00:57:13.718538 | orchestrator | Thursday 11 September 2025 00:55:05 +0000 (0:00:00.636) 0:00:01.238 ****
2025-09-11 00:57:13.718546 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:57:13.718555 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:57:13.718562 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:57:13.718570 | orchestrator |
2025-09-11 00:57:13.718578 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-11 00:57:13.718586 | orchestrator | Thursday 11 September 2025 00:55:06 +0000 (0:00:00.631) 0:00:01.870 ****
2025-09-11 00:57:13.718594 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:57:13.718602 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:57:13.718610 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:57:13.718617 | orchestrator |
2025-09-11 00:57:13.718625 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-11 00:57:13.718633 | orchestrator | Thursday 11 September 2025 00:55:06 +0000 (0:00:00.265) 0:00:02.136 ****
2025-09-11 00:57:13.718667 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:57:13.718682 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:57:13.718695 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:57:13.718709 | orchestrator |
2025-09-11 00:57:13.718723 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-11 00:57:13.718738 | orchestrator | Thursday 11 September 2025 00:55:07 +0000 (0:00:00.654) 0:00:02.790 ****
2025-09-11 00:57:13.718752 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:57:13.718765 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:57:13.718774 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:57:13.718788 | orchestrator |
2025-09-11 00:57:13.718801 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-11 00:57:13.718814 | orchestrator | Thursday 11 September 2025 00:55:07 +0000 (0:00:00.257) 0:00:03.047 ****
2025-09-11 00:57:13.718827 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:57:13.718840 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:57:13.718855 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:57:13.718867 | orchestrator |
2025-09-11 00:57:13.718875 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-11 00:57:13.718884 | orchestrator | Thursday 11 September 2025 00:55:07 +0000 (0:00:00.260) 0:00:03.308 ****
2025-09-11 00:57:13.718897 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:57:13.718910 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:57:13.718924 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:57:13.719475 | orchestrator |
2025-09-11 00:57:13.719497 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-11 00:57:13.719505 | orchestrator | Thursday 11 September 2025 00:55:07 +0000 (0:00:00.279) 0:00:03.588 ****
2025-09-11 00:57:13.719514 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:57:13.719522 | orchestrator | skipping: [testbed-node-4]
2025-09-11 00:57:13.719530 | orchestrator | skipping: [testbed-node-5]
2025-09-11 00:57:13.719538 | orchestrator |
2025-09-11 00:57:13.719556 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-11 00:57:13.719564 | orchestrator | Thursday 11 September 2025 00:55:08 +0000 (0:00:00.373) 0:00:03.962 ****
2025-09-11 00:57:13.719572 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:57:13.719580 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:57:13.719588 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:57:13.719595 | orchestrator |
2025-09-11 00:57:13.719603 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-11 00:57:13.719611 | orchestrator | Thursday 11 September 2025 00:55:08 +0000 (0:00:00.253) 0:00:04.215 ****
2025-09-11 00:57:13.719619 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-11 00:57:13.719627 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-11 00:57:13.719634 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-11 00:57:13.719642 | orchestrator |
2025-09-11 00:57:13.719650 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-11 00:57:13.719658 | orchestrator | Thursday 11 September 2025 00:55:09 +0000 (0:00:00.601) 0:00:04.817 ****
2025-09-11 00:57:13.719666 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:57:13.719674 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:57:13.719681 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:57:13.719690 | orchestrator |
2025-09-11 00:57:13.719698 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-11 00:57:13.719705 | orchestrator | Thursday 11 September 2025 00:55:09 +0000 (0:00:00.347) 0:00:05.164 ****
2025-09-11 00:57:13.719713 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-11 00:57:13.719721 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-11 00:57:13.719729 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-11 00:57:13.719746 | orchestrator |
2025-09-11 00:57:13.719754 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-11 00:57:13.719762 | orchestrator | Thursday 11 September 2025 00:55:11 +0000 (0:00:01.982) 0:00:07.147 ****
2025-09-11 00:57:13.719769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-11 00:57:13.719778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-11 00:57:13.719785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-11 00:57:13.719793 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:57:13.719801 | orchestrator |
2025-09-11 00:57:13.719809 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-11 00:57:13.719848 | orchestrator | Thursday 11 September 2025 00:55:11 +0000 (0:00:00.376) 0:00:07.524 ****
2025-09-11 00:57:13.719858 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-11 00:57:13.719868 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-11 00:57:13.719877 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-11 00:57:13.719885 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:57:13.719893 | orchestrator |
2025-09-11 00:57:13.719900 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-11 00:57:13.719908 | orchestrator | Thursday 11 September 2025 00:55:12 +0000 (0:00:00.702) 0:00:08.226 ****
2025-09-11 00:57:13.719918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-11 00:57:13.719928 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-11 00:57:13.719940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-11 00:57:13.719949 | orchestrator | skipping: [testbed-node-3]
2025-09-11 00:57:13.719956 | orchestrator |
2025-09-11 00:57:13.719964 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-11 00:57:13.719972 | orchestrator | Thursday 11 September 2025 00:55:12 +0000 (0:00:00.143) 0:00:08.369 ****
2025-09-11 00:57:13.719982 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '42af05e8058c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-11 00:55:10.158532', 'end': '2025-09-11 00:55:10.204856', 'delta': '0:00:00.046324', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['42af05e8058c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-11 00:57:13.719997 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '158cb4002f85', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-11 00:55:10.822244', 'end': '2025-09-11 00:55:10.857386', 'delta': '0:00:00.035142', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['158cb4002f85'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-11 00:57:13.720028 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'eeed184c899f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-11 00:55:11.335248', 'end': '2025-09-11 00:55:11.379833', 'delta': '0:00:00.044585', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['eeed184c899f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-11 00:57:13.720038 | orchestrator |
2025-09-11 00:57:13.720046 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-11 00:57:13.720054 | orchestrator | Thursday 11 September 2025 00:55:13 +0000 (0:00:00.349) 0:00:08.719 ****
2025-09-11 00:57:13.720061 | orchestrator | ok: [testbed-node-3]
2025-09-11 00:57:13.720069 | orchestrator | ok: [testbed-node-4]
2025-09-11 00:57:13.720077 | orchestrator | ok: [testbed-node-5]
2025-09-11 00:57:13.720085 | orchestrator |
2025-09-11 00:57:13.720093 | orchestrator
| TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-11 00:57:13.720103 | orchestrator | Thursday 11 September 2025 00:55:13 +0000 (0:00:00.409) 0:00:09.128 **** 2025-09-11 00:57:13.720112 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-11 00:57:13.720140 | orchestrator | 2025-09-11 00:57:13.720155 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-11 00:57:13.720170 | orchestrator | Thursday 11 September 2025 00:55:15 +0000 (0:00:01.771) 0:00:10.900 **** 2025-09-11 00:57:13.720183 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720192 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.720201 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.720210 | orchestrator | 2025-09-11 00:57:13.720219 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-11 00:57:13.720228 | orchestrator | Thursday 11 September 2025 00:55:15 +0000 (0:00:00.283) 0:00:11.184 **** 2025-09-11 00:57:13.720237 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720246 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.720254 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.720264 | orchestrator | 2025-09-11 00:57:13.720273 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-11 00:57:13.720281 | orchestrator | Thursday 11 September 2025 00:55:15 +0000 (0:00:00.412) 0:00:11.597 **** 2025-09-11 00:57:13.720290 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720299 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.720308 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.720316 | orchestrator | 2025-09-11 00:57:13.720325 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-11 00:57:13.720340 | 
orchestrator | Thursday 11 September 2025 00:55:16 +0000 (0:00:00.432) 0:00:12.029 **** 2025-09-11 00:57:13.720349 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:57:13.720358 | orchestrator | 2025-09-11 00:57:13.720371 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-11 00:57:13.720380 | orchestrator | Thursday 11 September 2025 00:55:16 +0000 (0:00:00.137) 0:00:12.166 **** 2025-09-11 00:57:13.720389 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720398 | orchestrator | 2025-09-11 00:57:13.720407 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-11 00:57:13.720416 | orchestrator | Thursday 11 September 2025 00:55:16 +0000 (0:00:00.223) 0:00:12.390 **** 2025-09-11 00:57:13.720425 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720434 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.720442 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.720451 | orchestrator | 2025-09-11 00:57:13.720459 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-11 00:57:13.720467 | orchestrator | Thursday 11 September 2025 00:55:17 +0000 (0:00:00.296) 0:00:12.686 **** 2025-09-11 00:57:13.720475 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720483 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.720490 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.720498 | orchestrator | 2025-09-11 00:57:13.720506 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-11 00:57:13.720513 | orchestrator | Thursday 11 September 2025 00:55:17 +0000 (0:00:00.318) 0:00:13.005 **** 2025-09-11 00:57:13.720521 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720529 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.720537 | orchestrator | skipping: 
[testbed-node-5] 2025-09-11 00:57:13.720545 | orchestrator | 2025-09-11 00:57:13.720552 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-11 00:57:13.720560 | orchestrator | Thursday 11 September 2025 00:55:17 +0000 (0:00:00.471) 0:00:13.477 **** 2025-09-11 00:57:13.720568 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720575 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.720583 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.720591 | orchestrator | 2025-09-11 00:57:13.720599 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-11 00:57:13.720606 | orchestrator | Thursday 11 September 2025 00:55:18 +0000 (0:00:00.297) 0:00:13.774 **** 2025-09-11 00:57:13.720614 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720622 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.720630 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.720637 | orchestrator | 2025-09-11 00:57:13.720645 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-11 00:57:13.720653 | orchestrator | Thursday 11 September 2025 00:55:18 +0000 (0:00:00.289) 0:00:14.064 **** 2025-09-11 00:57:13.720661 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720668 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.720678 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.720692 | orchestrator | 2025-09-11 00:57:13.720705 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-11 00:57:13.720753 | orchestrator | Thursday 11 September 2025 00:55:18 +0000 (0:00:00.298) 0:00:14.362 **** 2025-09-11 00:57:13.720770 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.720783 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.720796 | orchestrator | skipping: 
[testbed-node-5] 2025-09-11 00:57:13.720805 | orchestrator | 2025-09-11 00:57:13.720819 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-11 00:57:13.720832 | orchestrator | Thursday 11 September 2025 00:55:19 +0000 (0:00:00.467) 0:00:14.829 **** 2025-09-11 00:57:13.720846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7-osd--block--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7', 'dm-uuid-LVM-FFRmGyjMJjwyBNPX0mtgwKT08Ec5j8nKiYegaTkdjBNxanDroHG5paqF8aLIOfpq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.720869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0befa402--ebd4--5a4e--889f--8c71805f12b9-osd--block--0befa402--ebd4--5a4e--889f--8c71805f12b9', 'dm-uuid-LVM-O83ZxIZ2HKgtn3sHkfxrsbtDwkQwlfAEliHJePbC1pFTz2a2NqegeiNoBPqKriB7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.720885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.720905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.720917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.720930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.720944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.720994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part1', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part14', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part15', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part16', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721072 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7-osd--block--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9brBhl-9rD0-lF3D-tvrO-c62Q-KEhm-mK5VDE', 'scsi-0QEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1', 'scsi-SQEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--344fe78f--9b90--543d--a55e--ac4ca1a09e29-osd--block--344fe78f--9b90--543d--a55e--ac4ca1a09e29', 'dm-uuid-LVM-IcFdCfbs0J7lgVCaqy5mU1XDVu6CMknWjfIYlTGUrKo82NMO30nTpFLtBtTp4JTM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0befa402--ebd4--5a4e--889f--8c71805f12b9-osd--block--0befa402--ebd4--5a4e--889f--8c71805f12b9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VoFX0M-o1D9-F2dj-bYbg-UG6C-HSx3-9gasFd', 'scsi-0QEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d', 'scsi-SQEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2-osd--block--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2', 'dm-uuid-LVM-MSkNI7CPgw2rIqMkS0ULAlD4N133FmUTgRMv5M7TWonhKc6ByYfwxwZuP8Jgb8yR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1', 'scsi-SQEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
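The "Collect existed devices" task above iterates over Ansible's `ansible_facts['devices']` dict and skips loop devices, the root disk, and disks already claimed as ceph LVM physical volumes. The sketch below approximates that kind of filtering in plain Python; the selection rules are illustrative assumptions, not ceph-ansible's exact logic.

```python
# Hedged sketch: approximates the kind of device filtering ceph-facts performs
# over Ansible's `ansible_facts['devices']` dict when collecting candidate OSD
# disks. The rules here are assumptions for illustration, not ceph-ansible's code.

def candidate_osd_devices(devices: dict) -> list:
    """Return names of devices that look like empty, usable data disks."""
    candidates = []
    for name, info in devices.items():
        # Skip loopback devices, device-mapper targets, and CD-ROM drives.
        if name.startswith(("loop", "dm-", "sr")):
            continue
        # Skip removable media (Ansible reports this flag as the string '0'/'1').
        if info.get("removable") == "1":
            continue
        # Skip disks already in use: partitioned (e.g. the root disk) or held
        # by an LVM volume (e.g. an existing ceph OSD block device).
        if info.get("holders") or info.get("partitions"):
            continue
        # Skip zero-sized devices.
        if int(info.get("sectors", 0)) == 0:
            continue
        candidates.append(name)
    return sorted(candidates)
```

Fed facts shaped like the log entries above (sda partitioned, sdb/sdc holding ceph LVs, sdd empty), only the unused 20 GB disk `sdd` would survive the filter.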
 2025-09-11 00:57:13.721358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--344fe78f--9b90--543d--a55e--ac4ca1a09e29-osd--block--344fe78f--9b90--543d--a55e--ac4ca1a09e29'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B7gVWm-Wojy-a5dV-L0Tu-SnNR-y9ze-ex6u97', 'scsi-0QEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7', 'scsi-SQEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2-osd--block--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Hov3a8-9iDi-As7D-PKbZ-qStP-xNc3-IUTecg', 'scsi-0QEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256', 'scsi-SQEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721462 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.721473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233', 'scsi-SQEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721490 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.721498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1fcfbff8--db79--5f3f--a505--ec8e716f38d6-osd--block--1fcfbff8--db79--5f3f--a505--ec8e716f38d6', 'dm-uuid-LVM-BUfVwgnon6sZxiupdPkh7tHhQxfkU9wrcQv6EDIHaeS4TSVQjVRY6qHh53bV7eGO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--8a3e2512--7b8b--5f78--845d--17a09314c972-osd--block--8a3e2512--7b8b--5f78--845d--17a09314c972', 'dm-uuid-LVM-EVgkE7S10cRafvZbRO9DwQh3tt2BT98I9ULTHcbZnJGBSXwTw3BzEjDQvaVxVDdW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-11 00:57:13.721605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part1', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part14', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part15', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part16', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1fcfbff8--db79--5f3f--a505--ec8e716f38d6-osd--block--1fcfbff8--db79--5f3f--a505--ec8e716f38d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-A6YIqM-Tf5f-HngE-xa6U-QuYg-PMeg-ro81Ui', 'scsi-0QEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3', 'scsi-SQEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8a3e2512--7b8b--5f78--845d--17a09314c972-osd--block--8a3e2512--7b8b--5f78--845d--17a09314c972'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1OzxHd-LuzN-1Wex-64m7-1qYQ-8vcr-kBg1JM', 'scsi-0QEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a', 'scsi-SQEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a', 'scsi-SQEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-11 00:57:13.721663 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.721671 | orchestrator | 2025-09-11 00:57:13.721679 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-09-11 00:57:13.721687 | orchestrator | Thursday 11 September 2025 00:55:19 +0000 (0:00:00.584) 0:00:15.414 **** 2025-09-11 00:57:13.721695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7-osd--block--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7', 'dm-uuid-LVM-FFRmGyjMJjwyBNPX0mtgwKT08Ec5j8nKiYegaTkdjBNxanDroHG5paqF8aLIOfpq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0befa402--ebd4--5a4e--889f--8c71805f12b9-osd--block--0befa402--ebd4--5a4e--889f--8c71805f12b9', 'dm-uuid-LVM-O83ZxIZ2HKgtn3sHkfxrsbtDwkQwlfAEliHJePbC1pFTz2a2NqegeiNoBPqKriB7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721737 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721750 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721775 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721787 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--344fe78f--9b90--543d--a55e--ac4ca1a09e29-osd--block--344fe78f--9b90--543d--a55e--ac4ca1a09e29', 'dm-uuid-LVM-IcFdCfbs0J7lgVCaqy5mU1XDVu6CMknWjfIYlTGUrKo82NMO30nTpFLtBtTp4JTM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721813 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2-osd--block--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2', 'dm-uuid-LVM-MSkNI7CPgw2rIqMkS0ULAlD4N133FmUTgRMv5M7TWonhKc6ByYfwxwZuP8Jgb8yR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721826 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part1', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part14', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part15', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part16', 'scsi-SQEMU_QEMU_HARDDISK_024b17c6-95a2-4ffb-8436-08c360ae905c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721835 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7-osd--block--7f9f8cff--4bc3--57f6--8883--7f2afe56eba7'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9brBhl-9rD0-lF3D-tvrO-c62Q-KEhm-mK5VDE', 'scsi-0QEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1', 'scsi-SQEMU_QEMU_HARDDISK_64a40e05-4c55-4984-8320-b8e17729d0c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721862 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721871 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0befa402--ebd4--5a4e--889f--8c71805f12b9-osd--block--0befa402--ebd4--5a4e--889f--8c71805f12b9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VoFX0M-o1D9-F2dj-bYbg-UG6C-HSx3-9gasFd', 'scsi-0QEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d', 'scsi-SQEMU_QEMU_HARDDISK_a8a1b225-42a0-4e26-b86d-f2993393243d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721879 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1', 'scsi-SQEMU_QEMU_HARDDISK_8816fbb7-1f19-4e1f-8cd2-a94f4cca07a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721906 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721927 | orchestrator | skipping: 
[testbed-node-3] 2025-09-11 00:57:13.721936 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721944 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721964 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721983 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d34336cb-70d0-416f-8e5b-d5d62ae7b30e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.721993 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--344fe78f--9b90--543d--a55e--ac4ca1a09e29-osd--block--344fe78f--9b90--543d--a55e--ac4ca1a09e29'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B7gVWm-Wojy-a5dV-L0Tu-SnNR-y9ze-ex6u97', 'scsi-0QEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7', 'scsi-SQEMU_QEMU_HARDDISK_2cf4645b-2040-4422-b411-f526d3d4b2d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722005 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2-osd--block--4b4178b7--2f3b--5f27--b2b6--7c3306310ac2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Hov3a8-9iDi-As7D-PKbZ-qStP-xNc3-IUTecg', 'scsi-0QEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256', 'scsi-SQEMU_QEMU_HARDDISK_8046510b-ad40-4feb-b71a-a7eb3fa57256'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722043 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233', 'scsi-SQEMU_QEMU_HARDDISK_d4713353-19f0-445a-bb8a-6a961d38a233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722059 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722067 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.722075 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1fcfbff8--db79--5f3f--a505--ec8e716f38d6-osd--block--1fcfbff8--db79--5f3f--a505--ec8e716f38d6', 'dm-uuid-LVM-BUfVwgnon6sZxiupdPkh7tHhQxfkU9wrcQv6EDIHaeS4TSVQjVRY6qHh53bV7eGO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722084 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a3e2512--7b8b--5f78--845d--17a09314c972-osd--block--8a3e2512--7b8b--5f78--845d--17a09314c972', 'dm-uuid-LVM-EVgkE7S10cRafvZbRO9DwQh3tt2BT98I9ULTHcbZnJGBSXwTw3BzEjDQvaVxVDdW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722095 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722109 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722118 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722176 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722185 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722193 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722202 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722218 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722232 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part1', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part14', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part15', 'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part16', 
'scsi-SQEMU_QEMU_HARDDISK_cb0d1a75-4eb9-4c16-ac53-8c9c16c342cc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722241 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1fcfbff8--db79--5f3f--a505--ec8e716f38d6-osd--block--1fcfbff8--db79--5f3f--a505--ec8e716f38d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-A6YIqM-Tf5f-HngE-xa6U-QuYg-PMeg-ro81Ui', 'scsi-0QEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3', 'scsi-SQEMU_QEMU_HARDDISK_f923a2d7-e50a-4a10-a63c-46b2772477f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8a3e2512--7b8b--5f78--845d--17a09314c972-osd--block--8a3e2512--7b8b--5f78--845d--17a09314c972'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1OzxHd-LuzN-1Wex-64m7-1qYQ-8vcr-kBg1JM', 'scsi-0QEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a', 'scsi-SQEMU_QEMU_HARDDISK_75033e65-6f8e-4260-8d0b-0f414b2e283a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722266 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a', 'scsi-SQEMU_QEMU_HARDDISK_7e2337cd-4ca4-43ee-9815-6c22aae7aa7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722278 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-11-00-01-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-11 00:57:13.722287 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.722295 | orchestrator | 2025-09-11 00:57:13.722303 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-11 00:57:13.722311 | orchestrator | Thursday 11 September 2025 00:55:20 +0000 (0:00:00.603) 0:00:16.017 **** 2025-09-11 00:57:13.722319 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:57:13.722327 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:57:13.722335 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:57:13.722343 | orchestrator | 2025-09-11 00:57:13.722351 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-11 00:57:13.722358 | orchestrator | Thursday 11 September 2025 00:55:21 +0000 (0:00:00.722) 0:00:16.740 **** 2025-09-11 00:57:13.722366 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:57:13.722374 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:57:13.722382 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:57:13.722390 | orchestrator | 2025-09-11 00:57:13.722397 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-11 00:57:13.722405 | orchestrator | Thursday 11 September 2025 00:55:21 +0000 (0:00:00.423) 0:00:17.163 **** 2025-09-11 00:57:13.722413 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:57:13.722421 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:57:13.722429 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:57:13.722436 | orchestrator | 2025-09-11 00:57:13.722444 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-11 00:57:13.722456 | orchestrator | Thursday 11 September 2025 00:55:22 +0000 (0:00:00.627) 
0:00:17.791 **** 2025-09-11 00:57:13.722465 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.722472 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.722480 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.722488 | orchestrator | 2025-09-11 00:57:13.722496 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-11 00:57:13.722504 | orchestrator | Thursday 11 September 2025 00:55:22 +0000 (0:00:00.282) 0:00:18.074 **** 2025-09-11 00:57:13.722511 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.722519 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.722527 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.722535 | orchestrator | 2025-09-11 00:57:13.722543 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-11 00:57:13.722550 | orchestrator | Thursday 11 September 2025 00:55:22 +0000 (0:00:00.387) 0:00:18.461 **** 2025-09-11 00:57:13.722558 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.722566 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.722574 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.722582 | orchestrator | 2025-09-11 00:57:13.722590 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-11 00:57:13.722597 | orchestrator | Thursday 11 September 2025 00:55:23 +0000 (0:00:00.457) 0:00:18.919 **** 2025-09-11 00:57:13.722605 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-11 00:57:13.722617 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-11 00:57:13.722625 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-11 00:57:13.722633 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-11 00:57:13.722641 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-11 00:57:13.722649 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-11 00:57:13.722656 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-11 00:57:13.722664 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-11 00:57:13.722672 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-11 00:57:13.722680 | orchestrator | 2025-09-11 00:57:13.722688 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-11 00:57:13.722696 | orchestrator | Thursday 11 September 2025 00:55:24 +0000 (0:00:00.798) 0:00:19.717 **** 2025-09-11 00:57:13.722704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-11 00:57:13.722711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-11 00:57:13.722719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-11 00:57:13.722727 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.722735 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-11 00:57:13.722742 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-11 00:57:13.722750 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-11 00:57:13.722758 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.722765 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-11 00:57:13.722773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-11 00:57:13.722781 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-11 00:57:13.722789 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.722796 | orchestrator | 2025-09-11 00:57:13.722804 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-11 00:57:13.722812 | orchestrator | Thursday 11 September 2025 00:55:24 +0000 (0:00:00.346) 0:00:20.063 **** 2025-09-11 
00:57:13.722820 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 00:57:13.722828 | orchestrator | 2025-09-11 00:57:13.722836 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-11 00:57:13.722849 | orchestrator | Thursday 11 September 2025 00:55:25 +0000 (0:00:00.705) 0:00:20.768 **** 2025-09-11 00:57:13.722857 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.722865 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.722873 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.722881 | orchestrator | 2025-09-11 00:57:13.722892 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-11 00:57:13.722901 | orchestrator | Thursday 11 September 2025 00:55:25 +0000 (0:00:00.305) 0:00:21.074 **** 2025-09-11 00:57:13.722908 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.722916 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.722924 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.722932 | orchestrator | 2025-09-11 00:57:13.722940 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-11 00:57:13.722948 | orchestrator | Thursday 11 September 2025 00:55:25 +0000 (0:00:00.305) 0:00:21.379 **** 2025-09-11 00:57:13.722956 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.722963 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.722971 | orchestrator | skipping: [testbed-node-5] 2025-09-11 00:57:13.722979 | orchestrator | 2025-09-11 00:57:13.722987 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-11 00:57:13.722994 | orchestrator | Thursday 11 September 2025 00:55:26 +0000 (0:00:00.292) 0:00:21.672 **** 2025-09-11 
00:57:13.723002 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:57:13.723010 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:57:13.723018 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:57:13.723026 | orchestrator | 2025-09-11 00:57:13.723034 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-11 00:57:13.723041 | orchestrator | Thursday 11 September 2025 00:55:26 +0000 (0:00:00.532) 0:00:22.205 **** 2025-09-11 00:57:13.723049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:57:13.723057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-11 00:57:13.723065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-11 00:57:13.723073 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.723080 | orchestrator | 2025-09-11 00:57:13.723088 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-11 00:57:13.723096 | orchestrator | Thursday 11 September 2025 00:55:26 +0000 (0:00:00.357) 0:00:22.562 **** 2025-09-11 00:57:13.723104 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:57:13.723111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-11 00:57:13.723133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-11 00:57:13.723141 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.723149 | orchestrator | 2025-09-11 00:57:13.723157 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-11 00:57:13.723165 | orchestrator | Thursday 11 September 2025 00:55:27 +0000 (0:00:00.366) 0:00:22.929 **** 2025-09-11 00:57:13.723172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-11 00:57:13.723180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-11 00:57:13.723188 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-11 00:57:13.723196 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.723204 | orchestrator | 2025-09-11 00:57:13.723211 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-11 00:57:13.723219 | orchestrator | Thursday 11 September 2025 00:55:27 +0000 (0:00:00.355) 0:00:23.285 **** 2025-09-11 00:57:13.723227 | orchestrator | ok: [testbed-node-3] 2025-09-11 00:57:13.723238 | orchestrator | ok: [testbed-node-4] 2025-09-11 00:57:13.723246 | orchestrator | ok: [testbed-node-5] 2025-09-11 00:57:13.723254 | orchestrator | 2025-09-11 00:57:13.723262 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-11 00:57:13.723277 | orchestrator | Thursday 11 September 2025 00:55:27 +0000 (0:00:00.324) 0:00:23.609 **** 2025-09-11 00:57:13.723285 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-11 00:57:13.723292 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-11 00:57:13.723300 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-11 00:57:13.723308 | orchestrator | 2025-09-11 00:57:13.723316 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-11 00:57:13.723324 | orchestrator | Thursday 11 September 2025 00:55:28 +0000 (0:00:00.492) 0:00:24.101 **** 2025-09-11 00:57:13.723332 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-11 00:57:13.723340 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-11 00:57:13.723348 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-11 00:57:13.723355 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-11 00:57:13.723363 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-09-11 00:57:13.723371 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-11 00:57:13.723379 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-11 00:57:13.723387 | orchestrator | 2025-09-11 00:57:13.723395 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-11 00:57:13.723402 | orchestrator | Thursday 11 September 2025 00:55:29 +0000 (0:00:00.924) 0:00:25.026 **** 2025-09-11 00:57:13.723410 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-11 00:57:13.723418 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-11 00:57:13.723426 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-11 00:57:13.723433 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-11 00:57:13.723441 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-11 00:57:13.723449 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-11 00:57:13.723457 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-11 00:57:13.723465 | orchestrator | 2025-09-11 00:57:13.723476 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-11 00:57:13.723485 | orchestrator | Thursday 11 September 2025 00:55:31 +0000 (0:00:01.860) 0:00:26.886 **** 2025-09-11 00:57:13.723492 | orchestrator | skipping: [testbed-node-3] 2025-09-11 00:57:13.723500 | orchestrator | skipping: [testbed-node-4] 2025-09-11 00:57:13.723508 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-11 00:57:13.723516 | orchestrator | 2025-09-11 00:57:13.723523 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-11 00:57:13.723531 | orchestrator | Thursday 11 September 2025 00:55:31 +0000 (0:00:00.377) 0:00:27.263 **** 2025-09-11 00:57:13.723540 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-11 00:57:13.723548 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-11 00:57:13.723556 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-11 00:57:13.723568 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-11 00:57:13.723577 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-11 00:57:13.723585 | orchestrator | 2025-09-11 00:57:13.723593 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-09-11 00:57:13.723600 | orchestrator | Thursday 11 September 2025 00:56:17 +0000 (0:00:45.471) 0:01:12.735 **** 2025-09-11 00:57:13.723608 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723619 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723627 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723634 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723642 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723650 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723658 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-11 00:57:13.723666 | orchestrator | 2025-09-11 00:57:13.723673 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-11 00:57:13.723681 | orchestrator | Thursday 11 September 2025 00:56:41 +0000 (0:00:24.386) 0:01:37.121 **** 2025-09-11 00:57:13.723689 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723697 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723704 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723712 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723720 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723728 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723736 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-11 00:57:13.723744 | orchestrator | 2025-09-11 00:57:13.723751 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-11 00:57:13.723759 | orchestrator | Thursday 11 September 2025 00:56:54 +0000 (0:00:12.950) 0:01:50.072 **** 2025-09-11 00:57:13.723767 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723775 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-11 00:57:13.723782 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-11 00:57:13.723790 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723798 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-11 00:57:13.723806 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-11 00:57:13.723817 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723825 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-11 00:57:13.723833 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-11 00:57:13.723841 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723854 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-11 00:57:13.723862 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-11 00:57:13.723869 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723877 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-09-11 00:57:13.723885 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-11 00:57:13.723892 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-11 00:57:13.723900 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-11 00:57:13.723908 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-11 00:57:13.723916 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-11 00:57:13.723923 | orchestrator | 2025-09-11 00:57:13.723931 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:57:13.723939 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-11 00:57:13.723947 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-11 00:57:13.723956 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-11 00:57:13.723963 | orchestrator | 2025-09-11 00:57:13.723971 | orchestrator | 2025-09-11 00:57:13.723979 | orchestrator | 2025-09-11 00:57:13.723987 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:57:13.723995 | orchestrator | Thursday 11 September 2025 00:57:12 +0000 (0:00:17.660) 0:02:07.733 **** 2025-09-11 00:57:13.724002 | orchestrator | =============================================================================== 2025-09-11 00:57:13.724010 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.47s 2025-09-11 00:57:13.724018 | orchestrator | generate keys ---------------------------------------------------------- 24.39s 2025-09-11 00:57:13.724026 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.66s 
2025-09-11 00:57:13.724034 | orchestrator | get keys from monitors ------------------------------------------------- 12.95s 2025-09-11 00:57:13.724042 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.98s 2025-09-11 00:57:13.724049 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.86s 2025-09-11 00:57:13.724057 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.77s 2025-09-11 00:57:13.724065 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s 2025-09-11 00:57:13.724072 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.80s 2025-09-11 00:57:13.724080 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2025-09-11 00:57:13.724088 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.71s 2025-09-11 00:57:13.724096 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.70s 2025-09-11 00:57:13.724103 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.65s 2025-09-11 00:57:13.724111 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s 2025-09-11 00:57:13.724130 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.63s 2025-09-11 00:57:13.724138 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2025-09-11 00:57:13.724167 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.60s 2025-09-11 00:57:13.724176 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s 2025-09-11 00:57:13.724189 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.58s 2025-09-11 
00:57:13.724196 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.53s 2025-09-11 00:57:13.724204 | orchestrator | 2025-09-11 00:57:13 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:57:16.761421 | orchestrator | 2025-09-11 00:57:16 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED 2025-09-11 00:57:16.763682 | orchestrator | 2025-09-11 00:57:16 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED 2025-09-11 00:57:16.765749 | orchestrator | 2025-09-11 00:57:16 | INFO  | Task 80d43de5-13f7-4a4c-ab5a-4beceecff3e9 is in state STARTED 2025-09-11 00:57:16.765969 | orchestrator | 2025-09-11 00:57:16 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:57:19.799076 | orchestrator | 2025-09-11 00:57:19 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED 2025-09-11 00:57:19.800313 | orchestrator | 2025-09-11 00:57:19 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED 2025-09-11 00:57:19.801484 | orchestrator | 2025-09-11 00:57:19 | INFO  | Task 80d43de5-13f7-4a4c-ab5a-4beceecff3e9 is in state STARTED 2025-09-11 00:57:19.801645 | orchestrator | 2025-09-11 00:57:19 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:57:22.827759 | orchestrator | 2025-09-11 00:57:22 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED 2025-09-11 00:57:22.830155 | orchestrator | 2025-09-11 00:57:22 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED 2025-09-11 00:57:22.832229 | orchestrator | 2025-09-11 00:57:22 | INFO  | Task 80d43de5-13f7-4a4c-ab5a-4beceecff3e9 is in state STARTED 2025-09-11 00:57:22.832349 | orchestrator | 2025-09-11 00:57:22 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:57:25.872372 | orchestrator | 2025-09-11 00:57:25 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED 2025-09-11 00:57:25.874155 | orchestrator | 2025-09-11 00:57:25 | INFO  | Task 
91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED 2025-09-11 00:57:25.875746 | orchestrator | 2025-09-11 00:57:25 | INFO  | Task 80d43de5-13f7-4a4c-ab5a-4beceecff3e9 is in state STARTED 2025-09-11 00:57:25.875990 | orchestrator | 2025-09-11 00:57:25 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:57:28.926968 | orchestrator | 2025-09-11 00:57:28 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED 2025-09-11 00:57:28.929256 | orchestrator | 2025-09-11 00:57:28 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED 2025-09-11 00:57:28.931396 | orchestrator | 2025-09-11 00:57:28 | INFO  | Task 80d43de5-13f7-4a4c-ab5a-4beceecff3e9 is in state STARTED 2025-09-11 00:57:28.931704 | orchestrator | 2025-09-11 00:57:28 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:57:31.973316 | orchestrator | 2025-09-11 00:57:31 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED 2025-09-11 00:57:31.974745 | orchestrator | 2025-09-11 00:57:31 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED 2025-09-11 00:57:31.976263 | orchestrator | 2025-09-11 00:57:31 | INFO  | Task 80d43de5-13f7-4a4c-ab5a-4beceecff3e9 is in state STARTED 2025-09-11 00:57:31.976287 | orchestrator | 2025-09-11 00:57:31 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:57:35.016011 | orchestrator | 2025-09-11 00:57:35 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED 2025-09-11 00:57:35.016910 | orchestrator | 2025-09-11 00:57:35 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED 2025-09-11 00:57:35.017853 | orchestrator | 2025-09-11 00:57:35 | INFO  | Task 80d43de5-13f7-4a4c-ab5a-4beceecff3e9 is in state STARTED 2025-09-11 00:57:35.017891 | orchestrator | 2025-09-11 00:57:35 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:57:38.047887 | orchestrator | 2025-09-11 00:57:38 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state 
STARTED 2025-09-11 00:57:38.048105 | orchestrator | 2025-09-11 00:57:38 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED 2025-09-11 00:57:38.049391 | orchestrator | 2025-09-11 00:57:38 | INFO  | Task 80d43de5-13f7-4a4c-ab5a-4beceecff3e9 is in state STARTED 2025-09-11 00:57:38.049415 | orchestrator | 2025-09-11 00:57:38 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:57:41.079634 | orchestrator | 2025-09-11 00:57:41 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED 2025-09-11 00:57:41.081713 | orchestrator | 2025-09-11 00:57:41 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state STARTED 2025-09-11 00:57:41.083427 | orchestrator | 2025-09-11 00:57:41 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED 2025-09-11 00:57:41.084705 | orchestrator | 2025-09-11 00:57:41 | INFO  | Task 80d43de5-13f7-4a4c-ab5a-4beceecff3e9 is in state SUCCESS 2025-09-11 00:57:41.084727 | orchestrator | 2025-09-11 00:57:41 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:57:44.117653 | orchestrator | 2025-09-11 00:57:44 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED 2025-09-11 00:57:44.120898 | orchestrator | 2025-09-11 00:57:44 | INFO  | Task 91909f75-9667-481a-be23-b4cd6a469935 is in state SUCCESS 2025-09-11 00:57:44.122215 | orchestrator | 2025-09-11 00:57:44.122335 | orchestrator | 2025-09-11 00:57:44.122349 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-11 00:57:44.122360 | orchestrator | 2025-09-11 00:57:44.122371 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-11 00:57:44.122383 | orchestrator | Thursday 11 September 2025 00:57:15 +0000 (0:00:00.146) 0:00:00.146 **** 2025-09-11 00:57:44.122393 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-11 00:57:44.122405 | orchestrator | 
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-11 00:57:44.122416 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-11 00:57:44.122427 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-11 00:57:44.122437 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-11 00:57:44.122448 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-11 00:57:44.122458 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-11 00:57:44.122469 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-11 00:57:44.122480 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-11 00:57:44.122490 | orchestrator | 2025-09-11 00:57:44.122501 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-11 00:57:44.122512 | orchestrator | Thursday 11 September 2025 00:57:20 +0000 (0:00:04.212) 0:00:04.359 **** 2025-09-11 00:57:44.122523 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-11 00:57:44.122534 | orchestrator | 2025-09-11 00:57:44.122585 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-11 00:57:44.122597 | orchestrator | Thursday 11 September 2025 00:57:20 +0000 (0:00:00.891) 0:00:05.250 **** 2025-09-11 00:57:44.122630 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-11 00:57:44.122642 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-11 00:57:44.122653 | orchestrator | ok: 
[testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-11 00:57:44.122663 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-11 00:57:44.122674 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-11 00:57:44.122685 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-11 00:57:44.122696 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-11 00:57:44.122706 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-11 00:57:44.122717 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-11 00:57:44.122728 | orchestrator | 2025-09-11 00:57:44.122750 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-11 00:57:44.122761 | orchestrator | Thursday 11 September 2025 00:57:32 +0000 (0:00:11.329) 0:00:16.580 **** 2025-09-11 00:57:44.122772 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-11 00:57:44.122783 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-11 00:57:44.122793 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-11 00:57:44.122804 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-11 00:57:44.122815 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-11 00:57:44.122825 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-11 00:57:44.122836 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-11 00:57:44.122847 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-11 00:57:44.122857 | orchestrator | 
changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-11 00:57:44.122868 | orchestrator | 2025-09-11 00:57:44.122879 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:57:44.122890 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 00:57:44.122901 | orchestrator | 2025-09-11 00:57:44.122912 | orchestrator | 2025-09-11 00:57:44.122923 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:57:44.122934 | orchestrator | Thursday 11 September 2025 00:57:37 +0000 (0:00:05.714) 0:00:22.295 **** 2025-09-11 00:57:44.122944 | orchestrator | =============================================================================== 2025-09-11 00:57:44.122955 | orchestrator | Write ceph keys to the share directory --------------------------------- 11.33s 2025-09-11 00:57:44.122968 | orchestrator | Write ceph keys to the configuration directory -------------------------- 5.71s 2025-09-11 00:57:44.122980 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.21s 2025-09-11 00:57:44.122993 | orchestrator | Create share directory -------------------------------------------------- 0.89s 2025-09-11 00:57:44.123006 | orchestrator | 2025-09-11 00:57:44.123018 | orchestrator | 2025-09-11 00:57:44.123030 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 00:57:44.123042 | orchestrator | 2025-09-11 00:57:44.123067 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 00:57:44.123079 | orchestrator | Thursday 11 September 2025 00:56:06 +0000 (0:00:00.259) 0:00:00.259 **** 2025-09-11 00:57:44.123091 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.123104 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.123116 | orchestrator | ok: 
[testbed-node-2] 2025-09-11 00:57:44.123147 | orchestrator | 2025-09-11 00:57:44.123159 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 00:57:44.123179 | orchestrator | Thursday 11 September 2025 00:56:06 +0000 (0:00:00.326) 0:00:00.586 **** 2025-09-11 00:57:44.123192 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-11 00:57:44.123206 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-11 00:57:44.123218 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-11 00:57:44.123230 | orchestrator | 2025-09-11 00:57:44.123242 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-11 00:57:44.123255 | orchestrator | 2025-09-11 00:57:44.123268 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-11 00:57:44.123280 | orchestrator | Thursday 11 September 2025 00:56:06 +0000 (0:00:00.469) 0:00:01.055 **** 2025-09-11 00:57:44.123293 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:57:44.123305 | orchestrator | 2025-09-11 00:57:44.123318 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-11 00:57:44.123330 | orchestrator | Thursday 11 September 2025 00:56:07 +0000 (0:00:00.475) 0:00:01.531 **** 2025-09-11 00:57:44.123354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:57:44.123382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:57:44.123408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:57:44.123422 | orchestrator | 2025-09-11 00:57:44.123433 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-11 00:57:44.123444 | orchestrator | Thursday 11 September 2025 00:56:08 +0000 (0:00:01.140) 0:00:02.671 **** 2025-09-11 00:57:44.123455 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.123466 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.123476 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.123487 | orchestrator | 2025-09-11 00:57:44.123504 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-11 00:57:44.123515 | orchestrator | Thursday 11 September 2025 00:56:08 +0000 (0:00:00.461) 0:00:03.132 **** 2025-09-11 00:57:44.123525 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-11 00:57:44.123536 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-11 00:57:44.123552 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-11 00:57:44.123564 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-11 00:57:44.123574 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-11 00:57:44.123585 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-11 00:57:44.123596 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-11 00:57:44.123607 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-11 00:57:44.123617 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-11 00:57:44.123628 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'heat', 'enabled': 'no'})  2025-09-11 00:57:44.123639 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-11 00:57:44.123649 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-11 00:57:44.123660 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-11 00:57:44.123671 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-11 00:57:44.123681 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-11 00:57:44.123692 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-11 00:57:44.123703 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-11 00:57:44.123713 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-11 00:57:44.123724 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-11 00:57:44.123735 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-11 00:57:44.123745 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-11 00:57:44.123756 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-11 00:57:44.123766 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-11 00:57:44.123777 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-11 00:57:44.123789 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-11 00:57:44.123800 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-11 00:57:44.123811 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-11 00:57:44.123827 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-11 00:57:44.123838 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-11 00:57:44.123849 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-11 00:57:44.123866 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-11 00:57:44.123876 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-11 00:57:44.123887 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-11 00:57:44.123897 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-11 00:57:44.123908 | orchestrator | 2025-09-11 00:57:44.123919 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-11 00:57:44.123930 | orchestrator | Thursday 11 September 2025 00:56:09 +0000 (0:00:00.691) 0:00:03.824 
**** 2025-09-11 00:57:44.123940 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.123951 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.123962 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.123972 | orchestrator | 2025-09-11 00:57:44.123983 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-11 00:57:44.123994 | orchestrator | Thursday 11 September 2025 00:56:09 +0000 (0:00:00.296) 0:00:04.121 **** 2025-09-11 00:57:44.124005 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124015 | orchestrator | 2025-09-11 00:57:44.124026 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-11 00:57:44.124041 | orchestrator | Thursday 11 September 2025 00:56:10 +0000 (0:00:00.116) 0:00:04.238 **** 2025-09-11 00:57:44.124052 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124063 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.124074 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.124084 | orchestrator | 2025-09-11 00:57:44.124095 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-11 00:57:44.124106 | orchestrator | Thursday 11 September 2025 00:56:10 +0000 (0:00:00.425) 0:00:04.664 **** 2025-09-11 00:57:44.124117 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.124142 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.124153 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.124164 | orchestrator | 2025-09-11 00:57:44.124175 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-11 00:57:44.124185 | orchestrator | Thursday 11 September 2025 00:56:10 +0000 (0:00:00.288) 0:00:04.952 **** 2025-09-11 00:57:44.124196 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124206 | orchestrator | 2025-09-11 00:57:44.124217 | orchestrator | TASK [horizon : 
Update custom policy file name] ******************************** 2025-09-11 00:57:44.124228 | orchestrator | Thursday 11 September 2025 00:56:10 +0000 (0:00:00.134) 0:00:05.087 **** 2025-09-11 00:57:44.124238 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124249 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.124260 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.124270 | orchestrator | 2025-09-11 00:57:44.124281 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-11 00:57:44.124292 | orchestrator | Thursday 11 September 2025 00:56:11 +0000 (0:00:00.279) 0:00:05.366 **** 2025-09-11 00:57:44.124303 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.124313 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.124324 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.124334 | orchestrator | 2025-09-11 00:57:44.124345 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-11 00:57:44.124356 | orchestrator | Thursday 11 September 2025 00:56:11 +0000 (0:00:00.249) 0:00:05.615 **** 2025-09-11 00:57:44.124366 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124377 | orchestrator | 2025-09-11 00:57:44.124388 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-11 00:57:44.124404 | orchestrator | Thursday 11 September 2025 00:56:11 +0000 (0:00:00.120) 0:00:05.735 **** 2025-09-11 00:57:44.124415 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124426 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.124437 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.124447 | orchestrator | 2025-09-11 00:57:44.124458 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-11 00:57:44.124469 | orchestrator | Thursday 11 September 2025 00:56:11 +0000 
(0:00:00.364) 0:00:06.099 **** 2025-09-11 00:57:44.124480 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.124491 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.124502 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.124512 | orchestrator | 2025-09-11 00:57:44.124523 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-11 00:57:44.124534 | orchestrator | Thursday 11 September 2025 00:56:12 +0000 (0:00:00.261) 0:00:06.361 **** 2025-09-11 00:57:44.124544 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124555 | orchestrator | 2025-09-11 00:57:44.124566 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-11 00:57:44.124577 | orchestrator | Thursday 11 September 2025 00:56:12 +0000 (0:00:00.115) 0:00:06.476 **** 2025-09-11 00:57:44.124587 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124598 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.124609 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.124619 | orchestrator | 2025-09-11 00:57:44.124635 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-11 00:57:44.124646 | orchestrator | Thursday 11 September 2025 00:56:12 +0000 (0:00:00.263) 0:00:06.740 **** 2025-09-11 00:57:44.124657 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.124667 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.124678 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.124689 | orchestrator | 2025-09-11 00:57:44.124699 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-11 00:57:44.124710 | orchestrator | Thursday 11 September 2025 00:56:12 +0000 (0:00:00.281) 0:00:07.021 **** 2025-09-11 00:57:44.124721 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124732 | orchestrator | 2025-09-11 00:57:44.124742 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-11 00:57:44.124753 | orchestrator | Thursday 11 September 2025 00:56:13 +0000 (0:00:00.208) 0:00:07.230 **** 2025-09-11 00:57:44.124764 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124775 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.124786 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.124796 | orchestrator | 2025-09-11 00:57:44.124807 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-11 00:57:44.124818 | orchestrator | Thursday 11 September 2025 00:56:13 +0000 (0:00:00.263) 0:00:07.494 **** 2025-09-11 00:57:44.124828 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.124839 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.124850 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.124860 | orchestrator | 2025-09-11 00:57:44.124871 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-11 00:57:44.124882 | orchestrator | Thursday 11 September 2025 00:56:13 +0000 (0:00:00.266) 0:00:07.760 **** 2025-09-11 00:57:44.124892 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124903 | orchestrator | 2025-09-11 00:57:44.124914 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-11 00:57:44.124924 | orchestrator | Thursday 11 September 2025 00:56:13 +0000 (0:00:00.121) 0:00:07.881 **** 2025-09-11 00:57:44.124935 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.124946 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.124956 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.124967 | orchestrator | 2025-09-11 00:57:44.124978 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-11 00:57:44.125001 | orchestrator | Thursday 11 September 
2025 00:56:13 +0000 (0:00:00.270) 0:00:08.152 **** 2025-09-11 00:57:44.125012 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.125023 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.125033 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.125044 | orchestrator | 2025-09-11 00:57:44.125060 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-11 00:57:44.125071 | orchestrator | Thursday 11 September 2025 00:56:14 +0000 (0:00:00.366) 0:00:08.519 **** 2025-09-11 00:57:44.125082 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.125093 | orchestrator | 2025-09-11 00:57:44.125104 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-11 00:57:44.125114 | orchestrator | Thursday 11 September 2025 00:56:14 +0000 (0:00:00.117) 0:00:08.636 **** 2025-09-11 00:57:44.125177 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.125189 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.125200 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.125210 | orchestrator | 2025-09-11 00:57:44.125221 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-11 00:57:44.125232 | orchestrator | Thursday 11 September 2025 00:56:14 +0000 (0:00:00.274) 0:00:08.911 **** 2025-09-11 00:57:44.125243 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.125253 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.125264 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.125275 | orchestrator | 2025-09-11 00:57:44.125285 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-11 00:57:44.125296 | orchestrator | Thursday 11 September 2025 00:56:15 +0000 (0:00:00.278) 0:00:09.190 **** 2025-09-11 00:57:44.125307 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.125317 | orchestrator | 2025-09-11 
00:57:44.125328 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-11 00:57:44.125338 | orchestrator | Thursday 11 September 2025 00:56:15 +0000 (0:00:00.105) 0:00:09.295 **** 2025-09-11 00:57:44.125349 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.125360 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.125370 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.125381 | orchestrator | 2025-09-11 00:57:44.125392 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-11 00:57:44.125402 | orchestrator | Thursday 11 September 2025 00:56:15 +0000 (0:00:00.246) 0:00:09.542 **** 2025-09-11 00:57:44.125413 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.125424 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.125434 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.125445 | orchestrator | 2025-09-11 00:57:44.125455 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-11 00:57:44.125466 | orchestrator | Thursday 11 September 2025 00:56:15 +0000 (0:00:00.405) 0:00:09.947 **** 2025-09-11 00:57:44.125477 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.125487 | orchestrator | 2025-09-11 00:57:44.125498 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-11 00:57:44.125509 | orchestrator | Thursday 11 September 2025 00:56:15 +0000 (0:00:00.107) 0:00:10.054 **** 2025-09-11 00:57:44.125519 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.125530 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.125540 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.125549 | orchestrator | 2025-09-11 00:57:44.125559 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-11 00:57:44.125569 | orchestrator | 
Thursday 11 September 2025 00:56:16 +0000 (0:00:00.242) 0:00:10.297 **** 2025-09-11 00:57:44.125578 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:57:44.125588 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:57:44.125597 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:57:44.125606 | orchestrator | 2025-09-11 00:57:44.125616 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-11 00:57:44.125625 | orchestrator | Thursday 11 September 2025 00:56:16 +0000 (0:00:00.264) 0:00:10.561 **** 2025-09-11 00:57:44.125645 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.125655 | orchestrator | 2025-09-11 00:57:44.125665 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-11 00:57:44.125674 | orchestrator | Thursday 11 September 2025 00:56:16 +0000 (0:00:00.094) 0:00:10.655 **** 2025-09-11 00:57:44.125684 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.125693 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.125703 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.125712 | orchestrator | 2025-09-11 00:57:44.125721 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-11 00:57:44.125731 | orchestrator | Thursday 11 September 2025 00:56:16 +0000 (0:00:00.362) 0:00:11.018 **** 2025-09-11 00:57:44.125740 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:57:44.125750 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:57:44.125759 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:57:44.125769 | orchestrator | 2025-09-11 00:57:44.125778 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-11 00:57:44.125788 | orchestrator | Thursday 11 September 2025 00:56:18 +0000 (0:00:01.560) 0:00:12.579 **** 2025-09-11 00:57:44.125797 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-11 00:57:44.125807 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-11 00:57:44.125816 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-11 00:57:44.125825 | orchestrator | 2025-09-11 00:57:44.125835 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-11 00:57:44.125844 | orchestrator | Thursday 11 September 2025 00:56:19 +0000 (0:00:01.565) 0:00:14.144 **** 2025-09-11 00:57:44.125854 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-11 00:57:44.125864 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-11 00:57:44.125873 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-11 00:57:44.125883 | orchestrator | 2025-09-11 00:57:44.125892 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-11 00:57:44.125902 | orchestrator | Thursday 11 September 2025 00:56:22 +0000 (0:00:02.349) 0:00:16.494 **** 2025-09-11 00:57:44.125917 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-11 00:57:44.125927 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-11 00:57:44.125936 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-11 00:57:44.125946 | orchestrator | 2025-09-11 00:57:44.125955 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-11 00:57:44.125965 | orchestrator | Thursday 11 September 2025 00:56:24 +0000 (0:00:01.971) 
0:00:18.466 **** 2025-09-11 00:57:44.125974 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.125984 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.125993 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.126003 | orchestrator | 2025-09-11 00:57:44.126012 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-11 00:57:44.126063 | orchestrator | Thursday 11 September 2025 00:56:24 +0000 (0:00:00.280) 0:00:18.747 **** 2025-09-11 00:57:44.126073 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.126083 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.126092 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.126102 | orchestrator | 2025-09-11 00:57:44.126111 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-11 00:57:44.126136 | orchestrator | Thursday 11 September 2025 00:56:24 +0000 (0:00:00.302) 0:00:19.050 **** 2025-09-11 00:57:44.126153 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:57:44.126162 | orchestrator | 2025-09-11 00:57:44.126172 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-11 00:57:44.126182 | orchestrator | Thursday 11 September 2025 00:56:25 +0000 (0:00:00.565) 0:00:19.615 **** 2025-09-11 00:57:44.126198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:57:44.126218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:57:44.126241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:57:44.126252 | orchestrator | 2025-09-11 00:57:44.126262 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-11 00:57:44.126271 | orchestrator | Thursday 11 September 2025 00:56:27 +0000 (0:00:01.675) 0:00:21.291 **** 2025-09-11 00:57:44.126509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-11 00:57:44.126535 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.126552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-11 00:57:44.126569 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.126580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-11 00:57:44.126596 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.126606 | orchestrator | 2025-09-11 00:57:44.126615 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-11 00:57:44.126625 | orchestrator | Thursday 11 September 2025 00:56:27 +0000 (0:00:00.701) 0:00:21.993 **** 2025-09-11 00:57:44.126645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-11 00:57:44.126661 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.126677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-11 00:57:44.126688 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.126705 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-11 00:57:44.126721 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.126731 | orchestrator | 2025-09-11 00:57:44.126740 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-11 00:57:44.126750 | orchestrator | Thursday 11 September 2025 00:56:28 +0000 (0:00:00.789) 0:00:22.782 **** 2025-09-11 00:57:44.126765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:57:44.126783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:57:44.126808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-11 00:57:44.126819 | orchestrator | 2025-09-11 00:57:44.126829 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-11 00:57:44.126839 | orchestrator | Thursday 11 September 2025 00:56:30 +0000 (0:00:01.483) 0:00:24.265 **** 2025-09-11 00:57:44.126849 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:57:44.126859 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:57:44.126868 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:57:44.126877 | orchestrator | 2025-09-11 00:57:44.126887 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-11 00:57:44.126897 | orchestrator | Thursday 11 September 2025 00:56:30 +0000 (0:00:00.286) 0:00:24.552 **** 2025-09-11 00:57:44.126906 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for 
testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 00:57:44.126916 | orchestrator |
2025-09-11 00:57:44.126926 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-09-11 00:57:44.126940 | orchestrator | Thursday 11 September 2025 00:56:30 +0000 (0:00:00.498) 0:00:25.050 ****
2025-09-11 00:57:44.126950 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:57:44.126960 | orchestrator |
2025-09-11 00:57:44.126974 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-09-11 00:57:44.126983 | orchestrator | Thursday 11 September 2025 00:56:33 +0000 (0:00:02.196) 0:00:27.247 ****
2025-09-11 00:57:44.126993 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:57:44.127003 | orchestrator |
2025-09-11 00:57:44.127012 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-09-11 00:57:44.127022 | orchestrator | Thursday 11 September 2025 00:56:35 +0000 (0:00:02.558) 0:00:29.806 ****
2025-09-11 00:57:44.127031 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:57:44.127041 | orchestrator |
2025-09-11 00:57:44.127050 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-11 00:57:44.127060 | orchestrator | Thursday 11 September 2025 00:56:51 +0000 (0:00:16.025) 0:00:45.832 ****
2025-09-11 00:57:44.127069 | orchestrator |
2025-09-11 00:57:44.127079 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-11 00:57:44.127089 | orchestrator | Thursday 11 September 2025 00:56:51 +0000 (0:00:00.063) 0:00:45.895 ****
2025-09-11 00:57:44.127098 | orchestrator |
2025-09-11 00:57:44.127108 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-11 00:57:44.127117 | orchestrator | Thursday 11 September 2025 00:56:51 +0000 (0:00:00.067) 0:00:45.971 ****
2025-09-11 00:57:44.127143 | orchestrator |
2025-09-11 00:57:44.127153 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-11 00:57:44.127162 | orchestrator | Thursday 11 September 2025 00:56:51 +0000 (0:00:00.067) 0:00:46.039 ****
2025-09-11 00:57:44.127172 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:57:44.127181 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:57:44.127191 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:57:44.127200 | orchestrator |
2025-09-11 00:57:44.127210 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:57:44.127220 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-11 00:57:44.127230 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-11 00:57:44.127239 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-11 00:57:44.127249 | orchestrator |
2025-09-11 00:57:44.127258 | orchestrator |
2025-09-11 00:57:44.127268 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:57:44.127277 | orchestrator | Thursday 11 September 2025 00:57:42 +0000 (0:00:50.625) 0:01:36.665 ****
2025-09-11 00:57:44.127287 | orchestrator | ===============================================================================
2025-09-11 00:57:44.127296 | orchestrator | horizon : Restart horizon container ------------------------------------ 50.63s
2025-09-11 00:57:44.127306 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.03s
2025-09-11 00:57:44.127315 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.56s
2025-09-11 00:57:44.127324 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.35s
2025-09-11 00:57:44.127334 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.20s
2025-09-11 00:57:44.127347 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.97s
2025-09-11 00:57:44.127357 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.68s
2025-09-11 00:57:44.127367 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.57s
2025-09-11 00:57:44.127376 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.56s
2025-09-11 00:57:44.127391 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.48s
2025-09-11 00:57:44.127401 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.14s
2025-09-11 00:57:44.127410 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.79s
2025-09-11 00:57:44.127419 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s
2025-09-11 00:57:44.127429 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s
2025-09-11 00:57:44.127438 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s
2025-09-11 00:57:44.127448 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s
2025-09-11 00:57:44.127457 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.48s
2025-09-11 00:57:44.127467 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2025-09-11 00:57:44.127476 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.46s
2025-09-11 00:57:44.127486 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.43s
2025-09-11 00:57:44.127495 | orchestrator | 2025-09-11 00:57:44 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:57:44.127505 | orchestrator | 2025-09-11 00:57:44 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:57:47.158384 | orchestrator | 2025-09-11 00:57:47 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:57:47.162261 | orchestrator | 2025-09-11 00:57:47 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:57:47.162333 | orchestrator | 2025-09-11 00:57:47 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:57:50.196441 | orchestrator | 2025-09-11 00:57:50 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:57:50.197252 | orchestrator | 2025-09-11 00:57:50 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:57:50.197288 | orchestrator | 2025-09-11 00:57:50 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:57:53.235549 | orchestrator | 2025-09-11 00:57:53 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:57:53.238072 | orchestrator | 2025-09-11 00:57:53 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:57:53.238116 | orchestrator | 2025-09-11 00:57:53 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:57:56.274115 | orchestrator | 2025-09-11 00:57:56 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:57:56.274231 | orchestrator | 2025-09-11 00:57:56 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:57:56.274246 | orchestrator | 2025-09-11 00:57:56 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:57:59.310479 | orchestrator | 2025-09-11 00:57:59 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:57:59.311368 | orchestrator | 2025-09-11 00:57:59 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:57:59.311406 | orchestrator | 2025-09-11 00:57:59 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:02.351619 | orchestrator | 2025-09-11 00:58:02 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:02.354419 | orchestrator | 2025-09-11 00:58:02 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:58:02.354508 | orchestrator | 2025-09-11 00:58:02 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:05.397156 | orchestrator | 2025-09-11 00:58:05 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:05.398643 | orchestrator | 2025-09-11 00:58:05 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:58:05.398904 | orchestrator | 2025-09-11 00:58:05 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:08.438302 | orchestrator | 2025-09-11 00:58:08 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:08.439802 | orchestrator | 2025-09-11 00:58:08 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:58:08.440044 | orchestrator | 2025-09-11 00:58:08 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:11.476949 | orchestrator | 2025-09-11 00:58:11 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:11.479763 | orchestrator | 2025-09-11 00:58:11 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:58:11.479803 | orchestrator | 2025-09-11 00:58:11 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:14.522163 | orchestrator | 2025-09-11 00:58:14 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:14.523622 | orchestrator | 2025-09-11 00:58:14 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:58:14.523820 | orchestrator | 2025-09-11 00:58:14 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:17.556310 | orchestrator | 2025-09-11 00:58:17 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:17.558712 | orchestrator | 2025-09-11 00:58:17 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:58:17.558760 | orchestrator | 2025-09-11 00:58:17 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:20.603894 | orchestrator | 2025-09-11 00:58:20 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:20.605315 | orchestrator | 2025-09-11 00:58:20 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:58:20.605348 | orchestrator | 2025-09-11 00:58:20 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:23.645600 | orchestrator | 2025-09-11 00:58:23 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:23.647068 | orchestrator | 2025-09-11 00:58:23 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:58:23.647099 | orchestrator | 2025-09-11 00:58:23 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:26.688000 | orchestrator | 2025-09-11 00:58:26 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:26.688764 | orchestrator | 2025-09-11 00:58:26 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:58:26.688797 | orchestrator | 2025-09-11 00:58:26 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:29.728651 | orchestrator | 2025-09-11 00:58:29 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:29.729523 | orchestrator | 2025-09-11 00:58:29 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state STARTED
2025-09-11 00:58:29.729556 | orchestrator | 2025-09-11 00:58:29 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:32.792167 | orchestrator | 2025-09-11 00:58:32 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:32.793623 | orchestrator | 2025-09-11 00:58:32 | INFO  | Task b2196fb8-c77d-4d7e-a6da-da6eae67b31f is in state STARTED
2025-09-11 00:58:32.797665 | orchestrator | 2025-09-11 00:58:32 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED
2025-09-11 00:58:32.799274 | orchestrator | 2025-09-11 00:58:32 | INFO  | Task 84889354-bbc7-46ea-b74e-eda18423bd16 is in state SUCCESS
2025-09-11 00:58:32.800232 | orchestrator | 2025-09-11 00:58:32 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED
2025-09-11 00:58:32.800257 | orchestrator | 2025-09-11 00:58:32 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:35.833728 | orchestrator | 2025-09-11 00:58:35 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:35.835610 | orchestrator | 2025-09-11 00:58:35 | INFO  | Task b2196fb8-c77d-4d7e-a6da-da6eae67b31f is in state SUCCESS
2025-09-11 00:58:35.838659 | orchestrator | 2025-09-11 00:58:35 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED
2025-09-11 00:58:35.838810 | orchestrator | 2025-09-11 00:58:35 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED
2025-09-11 00:58:35.839225 | orchestrator | 2025-09-11 00:58:35 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:38.888348 | orchestrator | 2025-09-11 00:58:38 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED
2025-09-11 00:58:38.888448 | orchestrator | 2025-09-11 00:58:38 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:38.889092 | orchestrator | 2025-09-11 00:58:38 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED
2025-09-11 00:58:38.892006 | orchestrator | 2025-09-11 00:58:38 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED
2025-09-11 00:58:38.892056 | orchestrator | 2025-09-11 00:58:38 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED
2025-09-11 00:58:38.892068 | orchestrator | 2025-09-11 00:58:38 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:41.999909 | orchestrator | 2025-09-11 00:58:41 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED
2025-09-11 00:58:42.000007 | orchestrator | 2025-09-11 00:58:41 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:42.000022 | orchestrator | 2025-09-11 00:58:41 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED
2025-09-11 00:58:42.000034 | orchestrator | 2025-09-11 00:58:41 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED
2025-09-11 00:58:42.000045 | orchestrator | 2025-09-11 00:58:41 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED
2025-09-11 00:58:42.000056 | orchestrator | 2025-09-11 00:58:41 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:45.014947 | orchestrator | 2025-09-11 00:58:45 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED
2025-09-11 00:58:45.016468 | orchestrator | 2025-09-11 00:58:45 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state STARTED
2025-09-11 00:58:45.019536 | orchestrator | 2025-09-11 00:58:45 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED
2025-09-11 00:58:45.023829 | orchestrator | 2025-09-11 00:58:45 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED
2025-09-11 00:58:45.025758 | orchestrator | 2025-09-11 00:58:45 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED
2025-09-11 00:58:45.025820 | orchestrator | 2025-09-11 00:58:45 | INFO  | Wait 1 second(s) until the next check
2025-09-11 00:58:48.070447 | orchestrator | 2025-09-11 00:58:48 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED
2025-09-11 00:58:48.073310 | orchestrator | 2025-09-11 00:58:48 | INFO  | Task cead2ac1-e422-4f34-8a69-a0e0e6ff1b13 is in state SUCCESS
2025-09-11 00:58:48.075854 | orchestrator |
2025-09-11 00:58:48.075894 | orchestrator |
2025-09-11 00:58:48.075907 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-09-11 00:58:48.075919 | orchestrator |
2025-09-11 00:58:48.075930 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-09-11 00:58:48.075941 | orchestrator | Thursday 11 September 2025 00:57:41 +0000 (0:00:00.213) 0:00:00.213 ****
2025-09-11 00:58:48.075953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-09-11 00:58:48.075965 | orchestrator |
2025-09-11 00:58:48.075976 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-09-11 00:58:48.075986 | orchestrator | Thursday 11 September 2025 00:57:42 +0000 (0:00:00.200) 0:00:00.413 ****
2025-09-11 00:58:48.075998 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-09-11 00:58:48.076008 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-09-11 00:58:48.076019 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-09-11 00:58:48.076031 | orchestrator |
2025-09-11 00:58:48.076053 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-09-11 00:58:48.076084 | orchestrator | Thursday 11 September 2025 00:57:43 +0000 (0:00:01.061) 0:00:01.475 ****
2025-09-11 00:58:48.076096 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-09-11 00:58:48.076106 | orchestrator |
2025-09-11 00:58:48.076141 | orchestrator | TASK [osism.services.cephclient : Copy keyring file]
***************************
2025-09-11 00:58:48.076152 | orchestrator | Thursday 11 September 2025 00:57:44 +0000 (0:00:01.038) 0:00:02.514 ****
2025-09-11 00:58:48.076163 | orchestrator | changed: [testbed-manager]
2025-09-11 00:58:48.076174 | orchestrator |
2025-09-11 00:58:48.076185 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-09-11 00:58:48.076196 | orchestrator | Thursday 11 September 2025 00:57:44 +0000 (0:00:00.758) 0:00:03.365 ****
2025-09-11 00:58:48.076207 | orchestrator | changed: [testbed-manager]
2025-09-11 00:58:48.076218 | orchestrator |
2025-09-11 00:58:48.076228 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-09-11 00:58:48.076239 | orchestrator | Thursday 11 September 2025 00:57:45 +0000 (0:00:00.758) 0:00:04.123 ****
2025-09-11 00:58:48.076249 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-09-11 00:58:48.076260 | orchestrator | ok: [testbed-manager]
2025-09-11 00:58:48.076271 | orchestrator |
2025-09-11 00:58:48.076282 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-09-11 00:58:48.076292 | orchestrator | Thursday 11 September 2025 00:58:19 +0000 (0:00:34.198) 0:00:38.322 ****
2025-09-11 00:58:48.076303 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-09-11 00:58:48.076314 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-09-11 00:58:48.076325 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-09-11 00:58:48.076336 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-09-11 00:58:48.076347 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-09-11 00:58:48.076357 | orchestrator |
2025-09-11 00:58:48.076368 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-09-11 00:58:48.076386 | orchestrator | Thursday 11 September 2025 00:58:23 +0000 (0:00:03.487) 0:00:41.809 ****
2025-09-11 00:58:48.076397 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-09-11 00:58:48.076410 | orchestrator |
2025-09-11 00:58:48.076422 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-09-11 00:58:48.076435 | orchestrator | Thursday 11 September 2025 00:58:23 +0000 (0:00:00.392) 0:00:42.202 ****
2025-09-11 00:58:48.076447 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:58:48.076459 | orchestrator |
2025-09-11 00:58:48.076487 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-09-11 00:58:48.076500 | orchestrator | Thursday 11 September 2025 00:58:23 +0000 (0:00:00.125) 0:00:42.328 ****
2025-09-11 00:58:48.076512 | orchestrator | skipping: [testbed-manager]
2025-09-11 00:58:48.076525 | orchestrator |
2025-09-11 00:58:48.076536 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-09-11 00:58:48.076549 | orchestrator | Thursday 11 September 2025 00:58:24 +0000 (0:00:00.289) 0:00:42.617 ****
2025-09-11 00:58:48.076561 | orchestrator | changed: [testbed-manager]
2025-09-11 00:58:48.076573 | orchestrator |
2025-09-11 00:58:48.076585 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-09-11 00:58:48.076598 | orchestrator | Thursday 11 September 2025 00:58:26 +0000 (0:00:02.774) 0:00:45.392 ****
2025-09-11 00:58:48.076610 | orchestrator | changed: [testbed-manager]
2025-09-11 00:58:48.076623 | orchestrator |
2025-09-11 00:58:48.076635 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-09-11 00:58:48.076648 | orchestrator | Thursday 11 September 2025 00:58:27 +0000 (0:00:00.719) 0:00:46.111 ****
2025-09-11 00:58:48.076660 | orchestrator | changed: [testbed-manager]
2025-09-11 00:58:48.076672 | orchestrator |
2025-09-11 00:58:48.076684 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-09-11 00:58:48.076696 | orchestrator | Thursday 11 September 2025 00:58:28 +0000 (0:00:00.620) 0:00:46.732 ****
2025-09-11 00:58:48.076709 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-09-11 00:58:48.076722 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-09-11 00:58:48.076734 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-09-11 00:58:48.076747 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-09-11 00:58:48.076759 | orchestrator |
2025-09-11 00:58:48.076770 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:58:48.076781 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-11 00:58:48.076792 | orchestrator |
2025-09-11 00:58:48.076803 | orchestrator |
2025-09-11 00:58:48.076858 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:58:48.076871 | orchestrator | Thursday 11 September 2025 00:58:29 +0000 (0:00:01.343) 0:00:48.075 ****
2025-09-11 00:58:48.076882 | orchestrator | ===============================================================================
2025-09-11 00:58:48.076893 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 34.20s
2025-09-11 00:58:48.076904 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.49s
2025-09-11 00:58:48.076915 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.77s
2025-09-11 00:58:48.076925 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.34s
2025-09-11 00:58:48.076936 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.06s
2025-09-11 00:58:48.076946 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.04s
2025-09-11 00:58:48.076957 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.85s
2025-09-11 00:58:48.076968 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.76s
2025-09-11 00:58:48.076978 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.72s
2025-09-11 00:58:48.076989 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s
2025-09-11 00:58:48.077000 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.39s
2025-09-11 00:58:48.077089 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s
2025-09-11 00:58:48.077100 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s
2025-09-11 00:58:48.077111 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2025-09-11 00:58:48.077153 | orchestrator |
2025-09-11 00:58:48.077164 | orchestrator |
2025-09-11 00:58:48.077185 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-11 00:58:48.077196 | orchestrator |
2025-09-11 00:58:48.077206 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-11 00:58:48.077217 | orchestrator | Thursday 11 September 2025 00:58:33 +0000 (0:00:00.171) 0:00:00.171 ****
2025-09-11 00:58:48.077228 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:58:48.077239 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:58:48.077249 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:58:48.077260 | orchestrator |
2025-09-11 00:58:48.077271 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-11 00:58:48.077281 | orchestrator | Thursday 11 September 2025 00:58:34 +0000 (0:00:00.268) 0:00:00.439 ****
2025-09-11 00:58:48.077292 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-11 00:58:48.077303 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-11 00:58:48.077314 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-11 00:58:48.077324 | orchestrator |
2025-09-11 00:58:48.077335 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-11 00:58:48.077346 | orchestrator |
2025-09-11 00:58:48.077357 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-11 00:58:48.077367 | orchestrator | Thursday 11 September 2025 00:58:34 +0000 (0:00:00.676) 0:00:01.116 ****
2025-09-11 00:58:48.077378 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:58:48.077389 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:58:48.077405 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:58:48.077416 | orchestrator |
2025-09-11 00:58:48.077427 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 00:58:48.077438 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:58:48.077449 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:58:48.077460 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 00:58:48.077471 | orchestrator |
2025-09-11 00:58:48.077482 | orchestrator |
2025-09-11 00:58:48.077493 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 00:58:48.077503 | orchestrator | Thursday 11 September 2025 00:58:35 +0000 (0:00:00.776) 0:00:01.892 ****
2025-09-11 00:58:48.077514 | orchestrator |
=============================================================================== 2025-09-11 00:58:48.077525 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.78s 2025-09-11 00:58:48.077535 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2025-09-11 00:58:48.077546 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2025-09-11 00:58:48.077556 | orchestrator | 2025-09-11 00:58:48.077567 | orchestrator | 2025-09-11 00:58:48.077578 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 00:58:48.077588 | orchestrator | 2025-09-11 00:58:48.077599 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 00:58:48.077610 | orchestrator | Thursday 11 September 2025 00:56:06 +0000 (0:00:00.262) 0:00:00.262 **** 2025-09-11 00:58:48.077621 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:58:48.077631 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:58:48.077642 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:58:48.077653 | orchestrator | 2025-09-11 00:58:48.077663 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 00:58:48.077674 | orchestrator | Thursday 11 September 2025 00:56:06 +0000 (0:00:00.320) 0:00:00.583 **** 2025-09-11 00:58:48.077685 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-11 00:58:48.077695 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-11 00:58:48.077706 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-11 00:58:48.077724 | orchestrator | 2025-09-11 00:58:48.077737 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-11 00:58:48.077750 | orchestrator | 2025-09-11 00:58:48.077797 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-09-11 00:58:48.077812 | orchestrator | Thursday 11 September 2025 00:56:06 +0000 (0:00:00.447) 0:00:01.030 **** 2025-09-11 00:58:48.077825 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:58:48.077837 | orchestrator | 2025-09-11 00:58:48.077850 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-11 00:58:48.077863 | orchestrator | Thursday 11 September 2025 00:56:07 +0000 (0:00:00.529) 0:00:01.560 **** 2025-09-11 00:58:48.077881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.077905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.077922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.077936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.077994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078189 | orchestrator | 2025-09-11 00:58:48.078201 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-11 00:58:48.078212 | orchestrator | Thursday 11 September 2025 00:56:09 +0000 (0:00:01.735) 0:00:03.295 **** 2025-09-11 00:58:48.078223 | 
orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-11 00:58:48.078243 | orchestrator | 2025-09-11 00:58:48.078255 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-11 00:58:48.078266 | orchestrator | Thursday 11 September 2025 00:56:09 +0000 (0:00:00.795) 0:00:04.091 **** 2025-09-11 00:58:48.078276 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:58:48.078287 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:58:48.078298 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:58:48.078309 | orchestrator | 2025-09-11 00:58:48.078320 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-11 00:58:48.078330 | orchestrator | Thursday 11 September 2025 00:56:10 +0000 (0:00:00.471) 0:00:04.562 **** 2025-09-11 00:58:48.078341 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-11 00:58:48.078352 | orchestrator | 2025-09-11 00:58:48.078363 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-11 00:58:48.078374 | orchestrator | Thursday 11 September 2025 00:56:11 +0000 (0:00:00.655) 0:00:05.217 **** 2025-09-11 00:58:48.078385 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:58:48.078396 | orchestrator | 2025-09-11 00:58:48.078414 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-11 00:58:48.078425 | orchestrator | Thursday 11 September 2025 00:56:11 +0000 (0:00:00.476) 0:00:05.694 **** 2025-09-11 00:58:48.078437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.078450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.078466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.078485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.078661 | orchestrator | 2025-09-11 00:58:48.078671 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-11 00:58:48.078681 | orchestrator | Thursday 11 September 2025 00:56:14 +0000 (0:00:03.208) 0:00:08.902 **** 2025-09-11 00:58:48.078692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-11 00:58:48.078709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.078719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:58:48.078730 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:58:48.078740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-11 00:58:48.078756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.078771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:58:48.078782 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:58:48.078799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-11 00:58:48.078810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.078820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:58:48.078830 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:58:48.078840 | orchestrator | 2025-09-11 00:58:48.078849 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-11 00:58:48.078859 | orchestrator | Thursday 11 September 2025 00:56:15 +0000 (0:00:00.649) 0:00:09.552 **** 2025-09-11 00:58:48.078873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-11 00:58:48.078889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.078900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:58:48.078910 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:58:48.078926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-11 00:58:48.078937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.078947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:58:48.078962 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:58:48.078977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-11 00:58:48.078987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.079003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-11 00:58:48.079013 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:58:48.079023 | orchestrator | 2025-09-11 00:58:48.079033 | orchestrator | TASK [keystone : Copying over config.json files for services] 
****************** 2025-09-11 00:58:48.079042 | orchestrator | Thursday 11 September 2025 00:56:16 +0000 (0:00:00.696) 0:00:10.249 **** 2025-09-11 00:58:48.079053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.079068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.079084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.079100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079111 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079191 | orchestrator | 2025-09-11 00:58:48.079201 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-11 00:58:48.079211 | orchestrator | Thursday 11 September 2025 00:56:19 +0000 (0:00:03.030) 0:00:13.279 **** 2025-09-11 00:58:48.079228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.079239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.079250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.079269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.079280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.079291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.079307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-11 00:58:48.079343 | orchestrator |
2025-09-11 00:58:48.079352 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-09-11 00:58:48.079362 | orchestrator | Thursday 11 September 2025 00:56:24 +0000 (0:00:05.264) 0:00:18.544 ****
2025-09-11 00:58:48.079372 | orchestrator | changed: [testbed-node-0]
2025-09-11 00:58:48.079382 | orchestrator | changed: [testbed-node-1]
2025-09-11 00:58:48.079392 | orchestrator | changed: [testbed-node-2]
2025-09-11 00:58:48.079401 | orchestrator |
2025-09-11 00:58:48.079411 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-09-11 00:58:48.079421 | orchestrator | Thursday 11 September 2025 00:56:25 +0000 (0:00:01.409) 0:00:19.953 ****
2025-09-11 00:58:48.079430 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:58:48.079444 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:58:48.079454 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:58:48.079463 | orchestrator |
2025-09-11 00:58:48.079473 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-09-11 00:58:48.079482 | orchestrator | Thursday 11 September 2025 00:56:26 +0000 (0:00:00.664) 0:00:20.618 ****
2025-09-11 00:58:48.079492 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:58:48.079502 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:58:48.079512 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:58:48.079521 | orchestrator |
2025-09-11 00:58:48.079531 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-09-11 00:58:48.079540 | orchestrator | Thursday 11 September 2025 00:56:26 +0000 (0:00:00.291) 0:00:20.910 ****
2025-09-11 00:58:48.079550 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:58:48.079560 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:58:48.079569 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:58:48.079579 | orchestrator |
2025-09-11 00:58:48.079589 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-09-11 00:58:48.079598 | orchestrator | Thursday 11 September 2025 00:56:27 +0000 (0:00:00.479) 0:00:21.390 ****
2025-09-11 00:58:48.079609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-11 00:58:48.079625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True,
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.079642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.079653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.079668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.079679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-11 00:58:48.079695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.079732 | orchestrator | 2025-09-11 00:58:48.079742 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-11 00:58:48.079752 | orchestrator | Thursday 
11 September 2025 00:56:29 +0000 (0:00:02.302) 0:00:23.692 ****
2025-09-11 00:58:48.079762 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:58:48.079771 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:58:48.079781 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:58:48.079790 | orchestrator |
2025-09-11 00:58:48.079800 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-09-11 00:58:48.079810 | orchestrator | Thursday 11 September 2025 00:56:29 +0000 (0:00:00.274) 0:00:23.967 ****
2025-09-11 00:58:48.079820 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-11 00:58:48.079830 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-11 00:58:48.079844 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-11 00:58:48.079854 | orchestrator |
2025-09-11 00:58:48.079863 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-09-11 00:58:48.079873 | orchestrator | Thursday 11 September 2025 00:56:31 +0000 (0:00:01.496) 0:00:25.463 ****
2025-09-11 00:58:48.079883 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-11 00:58:48.079893 | orchestrator |
2025-09-11 00:58:48.079902 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-09-11 00:58:48.079912 | orchestrator | Thursday 11 September 2025 00:56:32 +0000 (0:00:00.883) 0:00:26.347 ****
2025-09-11 00:58:48.079922 | orchestrator | skipping: [testbed-node-0]
2025-09-11 00:58:48.079931 | orchestrator | skipping: [testbed-node-1]
2025-09-11 00:58:48.079941 | orchestrator | skipping: [testbed-node-2]
2025-09-11 00:58:48.079951 | orchestrator |
2025-09-11 00:58:48.079960 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-09-11 00:58:48.079970 | orchestrator | Thursday 11 September 2025 00:56:32 +0000 (0:00:00.737) 0:00:27.084 ****
2025-09-11 00:58:48.079980 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-11 00:58:48.079989 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-11 00:58:48.080105 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-11 00:58:48.080131 | orchestrator |
2025-09-11 00:58:48.080141 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-09-11 00:58:48.080157 | orchestrator | Thursday 11 September 2025 00:56:33 +0000 (0:00:01.084) 0:00:28.169 ****
2025-09-11 00:58:48.080167 | orchestrator | ok: [testbed-node-0]
2025-09-11 00:58:48.080177 | orchestrator | ok: [testbed-node-1]
2025-09-11 00:58:48.080187 | orchestrator | ok: [testbed-node-2]
2025-09-11 00:58:48.080196 | orchestrator |
2025-09-11 00:58:48.080206 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-09-11 00:58:48.080215 | orchestrator | Thursday 11 September 2025 00:56:34 +0000 (0:00:00.279) 0:00:28.448 ****
2025-09-11 00:58:48.080225 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-11 00:58:48.080234 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-11 00:58:48.080244 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-11 00:58:48.080254 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-11 00:58:48.080264 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-11 00:58:48.080280 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-11 00:58:48.080290 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-11 00:58:48.080300 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-11 00:58:48.080310 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-11 00:58:48.080319 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-11 00:58:48.080329 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-11 00:58:48.080338 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-11 00:58:48.080348 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-11 00:58:48.080358 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-11 00:58:48.080367 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-11 00:58:48.080377 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-11 00:58:48.080387 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-11 00:58:48.080396 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-11 00:58:48.080406 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-11 00:58:48.080415 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-11 00:58:48.080425 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-11 00:58:48.080434 | orchestrator |
2025-09-11
00:58:48.080444 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-09-11 00:58:48.080453 | orchestrator | Thursday 11 September 2025 00:56:43 +0000 (0:00:09.008) 0:00:37.457 ****
2025-09-11 00:58:48.080463 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-11 00:58:48.080472 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-11 00:58:48.080482 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-11 00:58:48.080492 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-11 00:58:48.080501 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-11 00:58:48.080515 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-11 00:58:48.080525 | orchestrator |
2025-09-11 00:58:48.080539 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-09-11 00:58:48.080549 | orchestrator | Thursday 11 September 2025 00:56:46 +0000 (0:00:02.997) 0:00:40.455 ****
2025-09-11 00:58:48.080559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False,
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.080577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.080589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-11 00:58:48.080599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.080618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.080628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-11 00:58:48.080639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.080654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.080665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-11 00:58:48.080674 | orchestrator | 2025-09-11 00:58:48.080684 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-11 00:58:48.080694 | orchestrator | Thursday 11 September 2025 00:56:48 +0000 (0:00:02.390) 0:00:42.845 **** 2025-09-11 00:58:48.080704 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:58:48.080713 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:58:48.080723 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:58:48.080734 | orchestrator | 2025-09-11 00:58:48.080746 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-11 00:58:48.080757 | orchestrator | Thursday 11 September 2025 00:56:48 +0000 (0:00:00.299) 0:00:43.144 **** 2025-09-11 00:58:48.080769 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:58:48.080780 | orchestrator | 2025-09-11 00:58:48.080791 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-11 00:58:48.080811 | orchestrator | Thursday 11 September 2025 00:56:51 +0000 (0:00:02.484) 0:00:45.628 **** 2025-09-11 00:58:48.080822 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:58:48.080833 | orchestrator | 2025-09-11 00:58:48.080843 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-11 00:58:48.080855 | orchestrator | Thursday 11 September 2025 00:56:53 +0000 (0:00:02.163) 0:00:47.792 **** 2025-09-11 00:58:48.080867 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:58:48.080878 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:58:48.080889 | orchestrator | ok: 
[testbed-node-2] 2025-09-11 00:58:48.080900 | orchestrator | 2025-09-11 00:58:48.080911 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-11 00:58:48.080923 | orchestrator | Thursday 11 September 2025 00:56:54 +0000 (0:00:00.921) 0:00:48.714 **** 2025-09-11 00:58:48.080934 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:58:48.080945 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:58:48.080956 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:58:48.080967 | orchestrator | 2025-09-11 00:58:48.080979 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-11 00:58:48.080994 | orchestrator | Thursday 11 September 2025 00:56:54 +0000 (0:00:00.471) 0:00:49.186 **** 2025-09-11 00:58:48.081005 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:58:48.081016 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:58:48.081028 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:58:48.081039 | orchestrator | 2025-09-11 00:58:48.081050 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-11 00:58:48.081061 | orchestrator | Thursday 11 September 2025 00:56:55 +0000 (0:00:00.307) 0:00:49.493 **** 2025-09-11 00:58:48.081074 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:58:48.081084 | orchestrator | 2025-09-11 00:58:48.081095 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-11 00:58:48.081104 | orchestrator | Thursday 11 September 2025 00:57:09 +0000 (0:00:14.441) 0:01:03.935 **** 2025-09-11 00:58:48.081114 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:58:48.081170 | orchestrator | 2025-09-11 00:58:48.081180 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-11 00:58:48.081190 | orchestrator | Thursday 11 September 2025 00:57:19 +0000 (0:00:10.189) 0:01:14.124 **** 
2025-09-11 00:58:48.081199 | orchestrator | 2025-09-11 00:58:48.081209 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-11 00:58:48.081218 | orchestrator | Thursday 11 September 2025 00:57:19 +0000 (0:00:00.059) 0:01:14.184 **** 2025-09-11 00:58:48.081228 | orchestrator | 2025-09-11 00:58:48.081237 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-11 00:58:48.081247 | orchestrator | Thursday 11 September 2025 00:57:20 +0000 (0:00:00.061) 0:01:14.245 **** 2025-09-11 00:58:48.081256 | orchestrator | 2025-09-11 00:58:48.081266 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-11 00:58:48.081275 | orchestrator | Thursday 11 September 2025 00:57:20 +0000 (0:00:00.064) 0:01:14.310 **** 2025-09-11 00:58:48.081285 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:58:48.081294 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:58:48.081304 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:58:48.081313 | orchestrator | 2025-09-11 00:58:48.081323 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-11 00:58:48.081333 | orchestrator | Thursday 11 September 2025 00:57:39 +0000 (0:00:19.154) 0:01:33.465 **** 2025-09-11 00:58:48.081342 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:58:48.081352 | orchestrator | changed: [testbed-node-1] 2025-09-11 00:58:48.081361 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:58:48.081371 | orchestrator | 2025-09-11 00:58:48.081380 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-11 00:58:48.081390 | orchestrator | Thursday 11 September 2025 00:57:48 +0000 (0:00:09.562) 0:01:43.028 **** 2025-09-11 00:58:48.081405 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:58:48.081414 | orchestrator | changed: [testbed-node-1] 
2025-09-11 00:58:48.081429 | orchestrator | changed: [testbed-node-2] 2025-09-11 00:58:48.081439 | orchestrator | 2025-09-11 00:58:48.081449 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-11 00:58:48.081458 | orchestrator | Thursday 11 September 2025 00:57:55 +0000 (0:00:06.632) 0:01:49.660 **** 2025-09-11 00:58:48.081468 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 00:58:48.081478 | orchestrator | 2025-09-11 00:58:48.081487 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-11 00:58:48.081497 | orchestrator | Thursday 11 September 2025 00:57:56 +0000 (0:00:00.583) 0:01:50.244 **** 2025-09-11 00:58:48.081506 | orchestrator | ok: [testbed-node-1] 2025-09-11 00:58:48.081516 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:58:48.081526 | orchestrator | ok: [testbed-node-2] 2025-09-11 00:58:48.081535 | orchestrator | 2025-09-11 00:58:48.081545 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-11 00:58:48.081554 | orchestrator | Thursday 11 September 2025 00:57:56 +0000 (0:00:00.715) 0:01:50.959 **** 2025-09-11 00:58:48.081564 | orchestrator | changed: [testbed-node-0] 2025-09-11 00:58:48.081573 | orchestrator | 2025-09-11 00:58:48.081582 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-11 00:58:48.081592 | orchestrator | Thursday 11 September 2025 00:57:58 +0000 (0:00:01.738) 0:01:52.698 **** 2025-09-11 00:58:48.081602 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-11 00:58:48.081611 | orchestrator | 2025-09-11 00:58:48.081621 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-11 00:58:48.081631 | orchestrator | Thursday 11 September 2025 00:58:09 +0000 (0:00:11.459) 
0:02:04.158 **** 2025-09-11 00:58:48.081640 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-11 00:58:48.081650 | orchestrator | 2025-09-11 00:58:48.081659 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-11 00:58:48.081669 | orchestrator | Thursday 11 September 2025 00:58:32 +0000 (0:00:22.846) 0:02:27.004 **** 2025-09-11 00:58:48.081678 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-11 00:58:48.081686 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-11 00:58:48.081694 | orchestrator | 2025-09-11 00:58:48.081702 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-11 00:58:48.081710 | orchestrator | Thursday 11 September 2025 00:58:39 +0000 (0:00:07.067) 0:02:34.071 **** 2025-09-11 00:58:48.081718 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:58:48.081725 | orchestrator | 2025-09-11 00:58:48.081733 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-11 00:58:48.081741 | orchestrator | Thursday 11 September 2025 00:58:40 +0000 (0:00:00.269) 0:02:34.341 **** 2025-09-11 00:58:48.081749 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:58:48.081756 | orchestrator | 2025-09-11 00:58:48.081764 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-11 00:58:48.081772 | orchestrator | Thursday 11 September 2025 00:58:40 +0000 (0:00:00.091) 0:02:34.433 **** 2025-09-11 00:58:48.081779 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:58:48.081787 | orchestrator | 2025-09-11 00:58:48.081795 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-11 00:58:48.081806 | orchestrator | Thursday 11 September 2025 00:58:40 +0000 
(0:00:00.150) 0:02:34.584 **** 2025-09-11 00:58:48.081814 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:58:48.081822 | orchestrator | 2025-09-11 00:58:48.081830 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-11 00:58:48.081838 | orchestrator | Thursday 11 September 2025 00:58:40 +0000 (0:00:00.501) 0:02:35.085 **** 2025-09-11 00:58:48.081850 | orchestrator | ok: [testbed-node-0] 2025-09-11 00:58:48.081858 | orchestrator | 2025-09-11 00:58:48.081866 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-11 00:58:48.081873 | orchestrator | Thursday 11 September 2025 00:58:44 +0000 (0:00:03.791) 0:02:38.877 **** 2025-09-11 00:58:48.081881 | orchestrator | skipping: [testbed-node-0] 2025-09-11 00:58:48.081889 | orchestrator | skipping: [testbed-node-1] 2025-09-11 00:58:48.081897 | orchestrator | skipping: [testbed-node-2] 2025-09-11 00:58:48.081905 | orchestrator | 2025-09-11 00:58:48.081912 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 00:58:48.081921 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-11 00:58:48.081929 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-11 00:58:48.081937 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-11 00:58:48.081945 | orchestrator | 2025-09-11 00:58:48.081953 | orchestrator | 2025-09-11 00:58:48.081961 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 00:58:48.081968 | orchestrator | Thursday 11 September 2025 00:58:45 +0000 (0:00:00.339) 0:02:39.217 **** 2025-09-11 00:58:48.081976 | orchestrator | =============================================================================== 2025-09-11 
00:58:48.081984 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.85s 2025-09-11 00:58:48.081992 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.15s 2025-09-11 00:58:48.082000 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.44s 2025-09-11 00:58:48.082007 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.46s 2025-09-11 00:58:48.082036 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.19s 2025-09-11 00:58:48.082050 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.56s 2025-09-11 00:58:48.082058 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.01s 2025-09-11 00:58:48.082066 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.08s 2025-09-11 00:58:48.082074 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.63s 2025-09-11 00:58:48.082082 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.26s 2025-09-11 00:58:48.082090 | orchestrator | keystone : Creating default user role ----------------------------------- 3.79s 2025-09-11 00:58:48.082098 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.21s 2025-09-11 00:58:48.082106 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.03s 2025-09-11 00:58:48.082113 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.00s 2025-09-11 00:58:48.082134 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.48s 2025-09-11 00:58:48.082142 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.39s 2025-09-11 00:58:48.082150 
| orchestrator | keystone : Copying over existing policy file ---------------------------- 2.30s 2025-09-11 00:58:48.082157 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.16s 2025-09-11 00:58:48.082165 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.74s 2025-09-11 00:58:48.082173 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.74s 2025-09-11 00:58:48.082181 | orchestrator | 2025-09-11 00:58:48 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:58:48.082189 | orchestrator | 2025-09-11 00:58:48 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:58:48.082197 | orchestrator | 2025-09-11 00:58:48 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED 2025-09-11 00:58:48.082209 | orchestrator | 2025-09-11 00:58:48 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:58:48.082217 | orchestrator | 2025-09-11 00:58:48 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:58:51.111954 | orchestrator | 2025-09-11 00:58:51 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:58:51.112037 | orchestrator | 2025-09-11 00:58:51 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:58:51.119230 | orchestrator | 2025-09-11 00:58:51 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:58:51.121341 | orchestrator | 2025-09-11 00:58:51 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED 2025-09-11 00:58:51.123130 | orchestrator | 2025-09-11 00:58:51 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:58:51.123413 | orchestrator | 2025-09-11 00:58:51 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:58:54.149422 | orchestrator | 2025-09-11 00:58:54 | INFO  | Task 
e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:58:54.149687 | orchestrator | 2025-09-11 00:58:54 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:58:54.150462 | orchestrator | 2025-09-11 00:58:54 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:58:54.151191 | orchestrator | 2025-09-11 00:58:54 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED 2025-09-11 00:58:54.151986 | orchestrator | 2025-09-11 00:58:54 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:58:54.152012 | orchestrator | 2025-09-11 00:58:54 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:58:57.191053 | orchestrator | 2025-09-11 00:58:57 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:58:57.193319 | orchestrator | 2025-09-11 00:58:57 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:58:57.195903 | orchestrator | 2025-09-11 00:58:57 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:58:57.197199 | orchestrator | 2025-09-11 00:58:57 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED 2025-09-11 00:58:57.198912 | orchestrator | 2025-09-11 00:58:57 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:58:57.198953 | orchestrator | 2025-09-11 00:58:57 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:00.430005 | orchestrator | 2025-09-11 00:59:00 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:00.430206 | orchestrator | 2025-09-11 00:59:00 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:00.435412 | orchestrator | 2025-09-11 00:59:00 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:00.436319 | orchestrator | 2025-09-11 00:59:00 | INFO  | Task 
1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED 2025-09-11 00:59:00.437298 | orchestrator | 2025-09-11 00:59:00 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:00.437493 | orchestrator | 2025-09-11 00:59:00 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:03.468868 | orchestrator | 2025-09-11 00:59:03 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:03.468967 | orchestrator | 2025-09-11 00:59:03 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:03.469019 | orchestrator | 2025-09-11 00:59:03 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:03.469032 | orchestrator | 2025-09-11 00:59:03 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED 2025-09-11 00:59:03.469043 | orchestrator | 2025-09-11 00:59:03 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:03.469054 | orchestrator | 2025-09-11 00:59:03 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:06.484527 | orchestrator | 2025-09-11 00:59:06 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:06.485089 | orchestrator | 2025-09-11 00:59:06 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:06.485711 | orchestrator | 2025-09-11 00:59:06 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:06.486648 | orchestrator | 2025-09-11 00:59:06 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED 2025-09-11 00:59:06.487190 | orchestrator | 2025-09-11 00:59:06 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:06.487340 | orchestrator | 2025-09-11 00:59:06 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:09.515977 | orchestrator | 2025-09-11 00:59:09 | INFO  | Task 
e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:09.517230 | orchestrator | 2025-09-11 00:59:09 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:09.518780 | orchestrator | 2025-09-11 00:59:09 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:09.519961 | orchestrator | 2025-09-11 00:59:09 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED 2025-09-11 00:59:09.522710 | orchestrator | 2025-09-11 00:59:09 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:09.523492 | orchestrator | 2025-09-11 00:59:09 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:12.550972 | orchestrator | 2025-09-11 00:59:12 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:12.552428 | orchestrator | 2025-09-11 00:59:12 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:12.554143 | orchestrator | 2025-09-11 00:59:12 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:12.555984 | orchestrator | 2025-09-11 00:59:12 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state STARTED 2025-09-11 00:59:12.557736 | orchestrator | 2025-09-11 00:59:12 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:12.557812 | orchestrator | 2025-09-11 00:59:12 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:15.581638 | orchestrator | 2025-09-11 00:59:15 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:15.581820 | orchestrator | 2025-09-11 00:59:15 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:15.582649 | orchestrator | 2025-09-11 00:59:15 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:15.584099 | orchestrator | 2025-09-11 00:59:15 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 00:59:15.584778 | orchestrator | 2025-09-11 00:59:15 | INFO  | Task 1da95160-6498-4909-85a7-3aae86a782ca is in state SUCCESS 2025-09-11 00:59:15.585654 | orchestrator | 2025-09-11 00:59:15 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:15.585719 | orchestrator | 2025-09-11 00:59:15 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:18.624543 | orchestrator | 2025-09-11 00:59:18 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:18.657545 | orchestrator | 2025-09-11 00:59:18 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:18.657608 | orchestrator | 2025-09-11 00:59:18 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:18.657622 | orchestrator | 2025-09-11 00:59:18 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 00:59:18.657633 | orchestrator | 2025-09-11 00:59:18 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:18.657645 | orchestrator | 2025-09-11 00:59:18 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:21.652939 | orchestrator | 2025-09-11 00:59:21 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:21.653029 | orchestrator | 2025-09-11 00:59:21 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:21.653734 | orchestrator | 2025-09-11 00:59:21 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:21.654434 | orchestrator | 2025-09-11 00:59:21 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 00:59:21.655022 | orchestrator | 2025-09-11 00:59:21 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:21.655052 | orchestrator | 2025-09-11 00:59:21 | INFO  | Wait 1 
second(s) until the next check 2025-09-11 00:59:24.673991 | orchestrator | 2025-09-11 00:59:24 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:24.678239 | orchestrator | 2025-09-11 00:59:24 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:24.678482 | orchestrator | 2025-09-11 00:59:24 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:24.679103 | orchestrator | 2025-09-11 00:59:24 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 00:59:24.680245 | orchestrator | 2025-09-11 00:59:24 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:24.680270 | orchestrator | 2025-09-11 00:59:24 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:27.712191 | orchestrator | 2025-09-11 00:59:27 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:27.713601 | orchestrator | 2025-09-11 00:59:27 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:27.714951 | orchestrator | 2025-09-11 00:59:27 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:27.715575 | orchestrator | 2025-09-11 00:59:27 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 00:59:27.716306 | orchestrator | 2025-09-11 00:59:27 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:27.716328 | orchestrator | 2025-09-11 00:59:27 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:30.741443 | orchestrator | 2025-09-11 00:59:30 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:30.741692 | orchestrator | 2025-09-11 00:59:30 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:30.742385 | orchestrator | 2025-09-11 00:59:30 | INFO  | Task 
ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:30.742908 | orchestrator | 2025-09-11 00:59:30 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 00:59:30.743936 | orchestrator | 2025-09-11 00:59:30 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:30.744136 | orchestrator | 2025-09-11 00:59:30 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:33.773986 | orchestrator | 2025-09-11 00:59:33 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:33.774764 | orchestrator | 2025-09-11 00:59:33 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:33.776558 | orchestrator | 2025-09-11 00:59:33 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:33.777203 | orchestrator | 2025-09-11 00:59:33 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 00:59:33.777862 | orchestrator | 2025-09-11 00:59:33 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:33.777891 | orchestrator | 2025-09-11 00:59:33 | INFO  | Wait 1 second(s) until the next check 2025-09-11 00:59:36.800011 | orchestrator | 2025-09-11 00:59:36 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED 2025-09-11 00:59:36.800442 | orchestrator | 2025-09-11 00:59:36 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 00:59:36.800908 | orchestrator | 2025-09-11 00:59:36 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 00:59:36.801698 | orchestrator | 2025-09-11 00:59:36 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 00:59:36.803990 | orchestrator | 2025-09-11 00:59:36 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state STARTED 2025-09-11 00:59:36.804016 | orchestrator | 2025-09-11 00:59:36 | INFO  | Wait 1 
second(s) until the next check
[status polling from 2025-09-11 00:59:39 through 01:00:13 collapsed: tasks e76d5d42-5663-45f7-a4c6-f1bc57baf35b, bc3facad-740a-4413-b9b4-b4b921f4cfa1, ab4a9b75-dd5e-4ceb-861e-cca9f19887cc, 526cd17d-768e-4028-a037-9bf05be25ad2 and 1428679f-9ece-42d3-89a2-57aa0b60b0c4 repeatedly reported "is in state STARTED", checked every ~3 s with "Wait 1 second(s) until the next check" between cycles]
2025-09-11 01:00:13.184652 | orchestrator |
2025-09-11 01:00:13.184690 | orchestrator |
2025-09-11 01:00:13.184702 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-11 01:00:13.184714 | orchestrator |
2025-09-11 01:00:13.184726 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-11 01:00:13.184737 | orchestrator | Thursday 11 September 2025 00:58:41 +0000 (0:00:00.233) 0:00:00.233 ****
2025-09-11 01:00:13.184749 | orchestrator | ok: [testbed-node-0]
2025-09-11 01:00:13.184760 | orchestrator | ok: [testbed-node-1]
2025-09-11 01:00:13.184771 | orchestrator | ok: [testbed-node-2]
2025-09-11 01:00:13.184782 | orchestrator | ok: [testbed-manager]
2025-09-11 01:00:13.184792 | orchestrator | ok: [testbed-node-3]
2025-09-11 01:00:13.184803 | orchestrator | ok: [testbed-node-4]
2025-09-11 01:00:13.184814 | orchestrator | ok: [testbed-node-5]
2025-09-11 01:00:13.184825 | orchestrator |
2025-09-11 01:00:13.184836 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-11 01:00:13.184848 | orchestrator | Thursday 11 September 2025 00:58:41 +0000 (0:00:00.650) 0:00:00.883 ****
2025-09-11 01:00:13.184859 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-09-11 01:00:13.184871 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-09-11 01:00:13.184882 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-09-11 01:00:13.184893 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-09-11 01:00:13.184904 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-09-11 01:00:13.184914 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-09-11 01:00:13.184925 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-09-11 01:00:13.184936 | orchestrator |
2025-09-11 01:00:13.184947 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-11 01:00:13.184958 | orchestrator |
2025-09-11 01:00:13.184968 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-09-11 01:00:13.184979 | orchestrator | Thursday 11 September 2025 00:58:42 +0000 (0:00:00.688) 0:00:01.571 ****
2025-09-11 01:00:13.184991 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-11 01:00:13.185003 | orchestrator |
2025-09-11 01:00:13.185014 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-09-11 01:00:13.185039 | orchestrator | Thursday 11 September 2025 00:58:44 +0000 (0:00:01.761) 0:00:03.332 ****
2025-09-11 01:00:13.185050 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-09-11 01:00:13.185061 | orchestrator |
2025-09-11 01:00:13.185072 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-09-11 01:00:13.185083 | orchestrator | Thursday 11 September 2025 00:58:48 +0000 (0:00:04.111) 0:00:07.444 ****
2025-09-11 01:00:13.185094 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-09-11 01:00:13.185140 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-09-11 01:00:13.185152 | orchestrator |
2025-09-11 01:00:13.185163 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-09-11 01:00:13.185174 | orchestrator | Thursday 11 September 2025 00:58:54 +0000 (0:00:06.487) 0:00:13.931 ****
2025-09-11 01:00:13.185184 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-11 01:00:13.185195 | orchestrator |
2025-09-11 01:00:13.185206 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-09-11 01:00:13.185217 | orchestrator | Thursday 11 September 2025 00:58:57 +0000 (0:00:02.936) 0:00:16.867 ****
2025-09-11 01:00:13.185227 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-11 01:00:13.185240 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-09-11 01:00:13.185265 | orchestrator |
2025-09-11 01:00:13.185278 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-09-11 01:00:13.185290 | orchestrator | Thursday 11 September 2025 00:59:02 +0000 (0:00:04.274) 0:00:21.142 ****
2025-09-11 01:00:13.185304 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-11 01:00:13.185317 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-09-11 01:00:13.185330 | orchestrator |
2025-09-11 01:00:13.185342 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-09-11 01:00:13.185355 | orchestrator | Thursday 11 September 2025 00:59:08 +0000 (0:00:06.887) 0:00:28.029 ****
2025-09-11 01:00:13.185367 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-09-11 01:00:13.185379 | orchestrator |
2025-09-11 01:00:13.185393 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 01:00:13.185405 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 01:00:13.185418 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 01:00:13.185431 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 01:00:13.185444 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 01:00:13.185456 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 01:00:13.185481 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 01:00:13.185495 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 01:00:13.185508 | orchestrator |
2025-09-11 01:00:13.185520 | orchestrator |
2025-09-11 01:00:13.185533 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 01:00:13.185545 | orchestrator | Thursday 11 September 2025 00:59:13 +0000 (0:00:05.002) 0:00:33.032 ****
2025-09-11 01:00:13.185558 | orchestrator | ===============================================================================
2025-09-11 01:00:13.185570 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.89s
2025-09-11 01:00:13.185583 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.49s
2025-09-11 01:00:13.185594 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.00s
2025-09-11 01:00:13.185605 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.27s
2025-09-11 01:00:13.185615 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.11s
2025-09-11 01:00:13.185639 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.94s
2025-09-11 01:00:13.185650 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.76s
2025-09-11 01:00:13.185661 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s
2025-09-11 01:00:13.185672 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s
2025-09-11 01:00:13.185682 | orchestrator |
2025-09-11 01:00:13.185693 | orchestrator |
2025-09-11 01:00:13.185704 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-09-11 01:00:13.185714 | orchestrator |
2025-09-11 01:00:13.185725 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-09-11 01:00:13.185736 | orchestrator | Thursday 11 September 2025 00:58:34 +0000 (0:00:00.268) 0:00:00.268 ****
2025-09-11 01:00:13.185747 | orchestrator | changed: [testbed-manager]
2025-09-11 01:00:13.185758 | orchestrator |
2025-09-11 01:00:13.185768 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-09-11 01:00:13.185785 | orchestrator | Thursday 11 September 2025 00:58:36 +0000 (0:00:02.147) 0:00:02.415 ****
2025-09-11 01:00:13.185797 | orchestrator | changed: [testbed-manager]
2025-09-11 01:00:13.185807 | orchestrator |
2025-09-11 01:00:13.185823 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-09-11 01:00:13.185834 | orchestrator | Thursday 11 September 2025 00:58:37 +0000 (0:00:01.013) 0:00:03.429 ****
2025-09-11 01:00:13.185845 | orchestrator | changed: [testbed-manager]
2025-09-11 01:00:13.185856 | orchestrator |
2025-09-11 01:00:13.185867 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-09-11 01:00:13.185877 | orchestrator | Thursday 11 September 2025 00:58:38 +0000 (0:00:00.948) 0:00:04.377 ****
2025-09-11 01:00:13.185888 | orchestrator | changed: [testbed-manager]
2025-09-11 01:00:13.185899 | orchestrator |
2025-09-11 01:00:13.185910 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-09-11 01:00:13.185920 | orchestrator | Thursday 11 September 2025 00:58:40 +0000 (0:00:01.773) 0:00:06.150 ****
2025-09-11 01:00:13.185931 | orchestrator | changed: [testbed-manager]
2025-09-11 01:00:13.185942 | orchestrator |
2025-09-11 01:00:13.185953 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-09-11 01:00:13.185963 | orchestrator | Thursday 11 September 2025 00:58:40 +0000 (0:00:00.871) 0:00:07.022 ****
2025-09-11 01:00:13.185974 | orchestrator | changed: [testbed-manager]
2025-09-11 01:00:13.185984 | orchestrator |
2025-09-11 01:00:13.185995 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-09-11 01:00:13.186006 | orchestrator | Thursday 11 September 2025 00:58:41 +0000 (0:00:00.763) 0:00:07.786 ****
2025-09-11 01:00:13.186067 | orchestrator | changed: [testbed-manager]
2025-09-11 01:00:13.186079 | orchestrator |
2025-09-11 01:00:13.186089 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-09-11 01:00:13.186100 | orchestrator | Thursday 11 September 2025 00:58:43 +0000 (0:00:01.360) 0:00:09.146 ****
2025-09-11 01:00:13.186134 | orchestrator | changed: [testbed-manager]
2025-09-11 01:00:13.186145 | orchestrator |
2025-09-11 01:00:13.186156 | orchestrator | TASK [Create admin user] *******************************************************
2025-09-11 01:00:13.186166 | orchestrator | Thursday 11 September 2025 00:58:43 +0000 (0:00:00.918) 0:00:10.065 ****
2025-09-11 01:00:13.186177 | orchestrator | changed: [testbed-manager]
2025-09-11 01:00:13.186187 | orchestrator |
2025-09-11 01:00:13.186198 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-09-11 01:00:13.186209 | orchestrator | Thursday 11 September 2025 00:59:48 +0000 (0:01:04.531) 0:01:14.596 ****
2025-09-11 01:00:13.186219 | orchestrator | skipping: [testbed-manager]
2025-09-11 01:00:13.186230 | orchestrator |
2025-09-11 01:00:13.186240 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-11 01:00:13.186251 | orchestrator |
2025-09-11 01:00:13.186262 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-11 01:00:13.186273 | orchestrator | Thursday 11 September 2025 00:59:48 +0000 (0:00:00.128) 0:01:14.724 ****
2025-09-11 01:00:13.186283 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:00:13.186294 | orchestrator |
2025-09-11 01:00:13.186305 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-11 01:00:13.186315 | orchestrator |
2025-09-11 01:00:13.186326 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-11 01:00:13.186337 | orchestrator | Thursday 11 September 2025 01:00:00 +0000 (0:00:11.723) 0:01:26.448 ****
2025-09-11 01:00:13.186347 | orchestrator | changed: [testbed-node-1]
2025-09-11 01:00:13.186358 | orchestrator |
2025-09-11 01:00:13.186369 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-11 01:00:13.186379 | orchestrator |
2025-09-11 01:00:13.186390 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-11 01:00:13.186401 | orchestrator | Thursday 11 September 2025 01:00:01 +0000 (0:00:01.379) 0:01:27.828 ****
2025-09-11 01:00:13.186411 | orchestrator | changed: [testbed-node-2]
2025-09-11 01:00:13.186430 | orchestrator |
2025-09-11 01:00:13.186448 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 01:00:13.186459 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-11 01:00:13.186470 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 01:00:13.186481 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 01:00:13.186492 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-11 01:00:13.186503 | orchestrator |
2025-09-11 01:00:13.186514 | orchestrator |
2025-09-11 01:00:13.186524 | orchestrator |
2025-09-11 01:00:13.186535 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 01:00:13.186546 | orchestrator | Thursday 11 September 2025 01:00:12 +0000 (0:00:11.223) 0:01:39.051 ****
2025-09-11 01:00:13.186557 | orchestrator | ===============================================================================
2025-09-11 01:00:13.186567 | orchestrator | Create admin user ------------------------------------------------------ 64.53s
2025-09-11 01:00:13.186578 | orchestrator | Restart ceph manager service ------------------------------------------- 24.33s
2025-09-11 01:00:13.186588 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.15s
2025-09-11 01:00:13.186599 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.77s
2025-09-11 01:00:13.186610 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.36s
2025-09-11 01:00:13.186620 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.01s
2025-09-11 01:00:13.186631 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.95s
2025-09-11 01:00:13.186641 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.92s
2025-09-11 01:00:13.186658 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.87s
2025-09-11 01:00:13.186669 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.76s
2025-09-11 01:00:13.186679 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s
2025-09-11 01:00:13.186690 | orchestrator | 2025-09-11 01:00:13 | INFO  | Task 1428679f-9ece-42d3-89a2-57aa0b60b0c4 is in state SUCCESS
2025-09-11 01:00:13.186701 | orchestrator | 2025-09-11 01:00:13 | INFO  | Wait 1 second(s) until the next check
2025-09-11 01:00:16.212584 | orchestrator | 2025-09-11 01:00:16 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state STARTED
2025-09-11 01:00:16.214435 | orchestrator | 2025-09-11 01:00:16 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED
2025-09-11 01:00:16.216417 | orchestrator | 2025-09-11 01:00:16 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED
2025-09-11 01:00:16.218966 | orchestrator 
| 2025-09-11 01:00:16 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED
[status polling from 2025-09-11 01:00:16 through 01:01:23 collapsed: tasks e76d5d42-5663-45f7-a4c6-f1bc57baf35b, bc3facad-740a-4413-b9b4-b4b921f4cfa1, ab4a9b75-dd5e-4ceb-861e-cca9f19887cc and 526cd17d-768e-4028-a037-9bf05be25ad2 repeatedly reported "is in state STARTED", checked every ~3 s with "Wait 1 second(s) until the next check" between cycles]
2025-09-11 01:01:26.200147 | orchestrator |
2025-09-11 01:01:26.200237 | orchestrator |
2025-09-11 01:01:26.200252 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-11 01:01:26.200265 | orchestrator |
2025-09-11 01:01:26.200276 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-11 01:01:26.200288 | orchestrator | Thursday 11 September 2025 00:58:41 +0000 (0:00:00.249) 0:00:00.250 ****
2025-09-11 01:01:26.200299 | orchestrator | ok: [testbed-node-0]
2025-09-11 01:01:26.200311 | orchestrator | ok: [testbed-node-1]
2025-09-11 01:01:26.200322 | orchestrator | ok: [testbed-node-2]
2025-09-11 01:01:26.200333 | orchestrator |
2025-09-11 01:01:26.200344 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-11 01:01:26.200355 | orchestrator | Thursday 11 September 2025 00:58:41 +0000 (0:00:00.289) 0:00:00.539 ****
2025-09-11 01:01:26.200366 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-11 01:01:26.200377 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-11 01:01:26.200389 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-11 01:01:26.200422 | orchestrator | 2025-09-11 01:01:26.200434 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-11 01:01:26.200445 | orchestrator | 2025-09-11 01:01:26.200456 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-11 01:01:26.200467 | orchestrator | Thursday 11 September 2025 00:58:41 +0000 (0:00:00.357) 0:00:00.897 **** 2025-09-11 01:01:26.200477 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:01:26.200488 | orchestrator | 2025-09-11 01:01:26.200499 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-11 01:01:26.200510 | orchestrator | Thursday 11 September 2025 00:58:42 +0000 (0:00:00.493) 0:00:01.390 **** 2025-09-11 01:01:26.200521 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-11 01:01:26.200531 | orchestrator | 2025-09-11 01:01:26.200542 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-11 01:01:26.200553 | orchestrator | Thursday 11 September 2025 00:58:46 +0000 (0:00:04.276) 0:00:05.667 **** 2025-09-11 01:01:26.200564 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-11 01:01:26.200575 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-11 01:01:26.200585 | orchestrator | 2025-09-11 01:01:26.200596 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-11 01:01:26.200607 | orchestrator | Thursday 11 September 2025 00:58:53 +0000 (0:00:06.926) 0:00:12.593 **** 2025-09-11 01:01:26.200618 | orchestrator | changed: [testbed-node-0] => 
(item=service) 2025-09-11 01:01:26.200629 | orchestrator | 2025-09-11 01:01:26.200642 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-11 01:01:26.200654 | orchestrator | Thursday 11 September 2025 00:58:56 +0000 (0:00:03.295) 0:00:15.889 **** 2025-09-11 01:01:26.200667 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-11 01:01:26.200680 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-11 01:01:26.200692 | orchestrator | 2025-09-11 01:01:26.200705 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-11 01:01:26.200718 | orchestrator | Thursday 11 September 2025 00:59:01 +0000 (0:00:04.637) 0:00:20.527 **** 2025-09-11 01:01:26.200730 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-11 01:01:26.200743 | orchestrator | 2025-09-11 01:01:26.200755 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-11 01:01:26.200767 | orchestrator | Thursday 11 September 2025 00:59:04 +0000 (0:00:03.583) 0:00:24.118 **** 2025-09-11 01:01:26.200780 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-11 01:01:26.200793 | orchestrator | 2025-09-11 01:01:26.200805 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-11 01:01:26.200831 | orchestrator | Thursday 11 September 2025 00:59:09 +0000 (0:00:04.649) 0:00:28.768 **** 2025-09-11 01:01:26.200869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.200898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.200914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.200934 | orchestrator | 2025-09-11 01:01:26.200947 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-11 01:01:26.200960 | orchestrator | Thursday 11 September 2025 00:59:12 +0000 (0:00:02.842) 0:00:31.610 **** 2025-09-11 01:01:26.200973 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:01:26.200986 | orchestrator | 2025-09-11 01:01:26.201005 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-11 01:01:26.201016 | orchestrator | Thursday 11 September 2025 00:59:13 +0000 (0:00:00.596) 0:00:32.206 **** 2025-09-11 01:01:26.201027 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:26.201038 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:01:26.201049 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:01:26.201059 | orchestrator | 2025-09-11 01:01:26.201070 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-11 01:01:26.201081 | orchestrator | Thursday 11 September 2025 
00:59:16 +0000 (0:00:03.639) 0:00:35.845 **** 2025-09-11 01:01:26.201117 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-11 01:01:26.201128 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-11 01:01:26.201139 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-11 01:01:26.201150 | orchestrator | 2025-09-11 01:01:26.201161 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-11 01:01:26.201172 | orchestrator | Thursday 11 September 2025 00:59:18 +0000 (0:00:01.500) 0:00:37.346 **** 2025-09-11 01:01:26.201183 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-11 01:01:26.201193 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-11 01:01:26.201205 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-11 01:01:26.201215 | orchestrator | 2025-09-11 01:01:26.201253 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-11 01:01:26.201266 | orchestrator | Thursday 11 September 2025 00:59:19 +0000 (0:00:01.045) 0:00:38.392 **** 2025-09-11 01:01:26.201277 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:01:26.201287 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:01:26.201298 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:01:26.201309 | orchestrator | 2025-09-11 01:01:26.201319 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-11 01:01:26.201330 | orchestrator | Thursday 11 September 2025 00:59:19 +0000 (0:00:00.596) 0:00:38.988 **** 2025-09-11 
01:01:26.201341 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:26.201351 | orchestrator | 2025-09-11 01:01:26.201362 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-11 01:01:26.201373 | orchestrator | Thursday 11 September 2025 00:59:20 +0000 (0:00:00.228) 0:00:39.217 **** 2025-09-11 01:01:26.201383 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:26.201394 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.201404 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:26.201415 | orchestrator | 2025-09-11 01:01:26.201426 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-11 01:01:26.201437 | orchestrator | Thursday 11 September 2025 00:59:20 +0000 (0:00:00.239) 0:00:39.456 **** 2025-09-11 01:01:26.201447 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:01:26.201458 | orchestrator | 2025-09-11 01:01:26.201469 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-11 01:01:26.201479 | orchestrator | Thursday 11 September 2025 00:59:20 +0000 (0:00:00.462) 0:00:39.919 **** 2025-09-11 01:01:26.201536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.201551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.201569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.201588 | orchestrator | 2025-09-11 01:01:26.201599 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-11 01:01:26.201610 | orchestrator | Thursday 11 September 2025 00:59:26 +0000 (0:00:05.716) 0:00:45.636 **** 2025-09-11 01:01:26.201630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-11 01:01:26.201643 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:26.201659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-11 01:01:26.201677 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.201697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-11 01:01:26.201709 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:26.201720 | orchestrator | 2025-09-11 01:01:26.201730 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-11 01:01:26.201741 | orchestrator | Thursday 11 September 2025 00:59:29 +0000 (0:00:03.230) 0:00:48.867 **** 2025-09-11 01:01:26.201753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-11 01:01:26.201770 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.201793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-11 01:01:26.201806 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:26.201817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-11 01:01:26.201841 | orchestrator | 
skipping: [testbed-node-2] 2025-09-11 01:01:26.201852 | orchestrator | 2025-09-11 01:01:26.201863 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-11 01:01:26.201874 | orchestrator | Thursday 11 September 2025 00:59:33 +0000 (0:00:03.607) 0:00:52.475 **** 2025-09-11 01:01:26.201885 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.201895 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:26.201906 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:26.201917 | orchestrator | 2025-09-11 01:01:26.201928 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-11 01:01:26.201938 | orchestrator | Thursday 11 September 2025 00:59:36 +0000 (0:00:03.179) 0:00:55.655 **** 2025-09-11 01:01:26.201959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.201973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.201996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.202008 | orchestrator | 2025-09-11 01:01:26.202108 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-11 01:01:26.202125 | orchestrator | Thursday 11 September 2025 00:59:40 +0000 (0:00:04.112) 0:00:59.767 **** 2025-09-11 01:01:26.202136 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:26.202147 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:01:26.202158 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:01:26.202169 | orchestrator | 2025-09-11 01:01:26.202179 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-11 01:01:26.202190 | orchestrator | Thursday 11 September 2025 00:59:46 +0000 (0:00:06.136) 0:01:05.903 **** 2025-09-11 01:01:26.202201 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.202212 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:26.202222 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:26.202233 | orchestrator | 2025-09-11 01:01:26.202244 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-11 01:01:26.202262 | orchestrator | Thursday 11 September 2025 00:59:51 +0000 (0:00:04.736) 0:01:10.640 **** 2025-09-11 01:01:26.202407 | orchestrator | 2025-09-11 01:01:26 | INFO  | Task e76d5d42-5663-45f7-a4c6-f1bc57baf35b is in state SUCCESS 2025-09-11 01:01:26.202483 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:26.202498 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.202509 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:26.202520 | orchestrator | 2025-09-11 01:01:26.202532 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-11 01:01:26.202543 | orchestrator | Thursday 11 September 2025 00:59:56 +0000 (0:00:05.057) 0:01:15.698 **** 2025-09-11 01:01:26.202554 | orchestrator 
| skipping: [testbed-node-0] 2025-09-11 01:01:26.202564 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:26.202576 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.202586 | orchestrator | 2025-09-11 01:01:26.202597 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-11 01:01:26.202632 | orchestrator | Thursday 11 September 2025 01:00:02 +0000 (0:00:05.513) 0:01:21.212 **** 2025-09-11 01:01:26.202644 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:26.202655 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.202665 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:26.202676 | orchestrator | 2025-09-11 01:01:26.202687 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-11 01:01:26.202698 | orchestrator | Thursday 11 September 2025 01:00:04 +0000 (0:00:02.901) 0:01:24.114 **** 2025-09-11 01:01:26.202709 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:26.202720 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.202730 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:26.202741 | orchestrator | 2025-09-11 01:01:26.202752 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-11 01:01:26.202763 | orchestrator | Thursday 11 September 2025 01:00:05 +0000 (0:00:00.399) 0:01:24.514 **** 2025-09-11 01:01:26.202774 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-11 01:01:26.202785 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.202796 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-11 01:01:26.202807 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:26.202818 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-11 01:01:26.202828 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:26.202839 | orchestrator | 2025-09-11 01:01:26.202850 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-11 01:01:26.202861 | orchestrator | Thursday 11 September 2025 01:00:09 +0000 (0:00:04.035) 0:01:28.549 **** 2025-09-11 01:01:26.202889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.202926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.202954 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-11 01:01:26.202967 | orchestrator | 2025-09-11 01:01:26.202981 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-11 01:01:26.202995 | orchestrator | Thursday 11 September 2025 01:00:13 +0000 (0:00:03.827) 0:01:32.377 **** 2025-09-11 01:01:26.203008 | orchestrator | skipping: 
[testbed-node-0] 2025-09-11 01:01:26.203020 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:26.203032 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:26.203045 | orchestrator | 2025-09-11 01:01:26.203059 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-11 01:01:26.203072 | orchestrator | Thursday 11 September 2025 01:00:13 +0000 (0:00:00.249) 0:01:32.626 **** 2025-09-11 01:01:26.203108 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:26.203121 | orchestrator | 2025-09-11 01:01:26.203134 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-11 01:01:26.203147 | orchestrator | Thursday 11 September 2025 01:00:15 +0000 (0:00:02.335) 0:01:34.961 **** 2025-09-11 01:01:26.203160 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:26.203173 | orchestrator | 2025-09-11 01:01:26.203191 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-11 01:01:26.203204 | orchestrator | Thursday 11 September 2025 01:00:18 +0000 (0:00:02.306) 0:01:37.267 **** 2025-09-11 01:01:26.203217 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:26.203230 | orchestrator | 2025-09-11 01:01:26.203242 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-11 01:01:26.203255 | orchestrator | Thursday 11 September 2025 01:00:20 +0000 (0:00:02.124) 0:01:39.392 **** 2025-09-11 01:01:26.203268 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:26.203282 | orchestrator | 2025-09-11 01:01:26.203294 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-11 01:01:26.203306 | orchestrator | Thursday 11 September 2025 01:00:45 +0000 (0:00:25.342) 0:02:04.735 **** 2025-09-11 01:01:26.203317 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:26.203328 | orchestrator | 2025-09-11 
01:01:26.203346 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-11 01:01:26.203358 | orchestrator | Thursday 11 September 2025 01:00:47 +0000 (0:00:02.071) 0:02:06.806 **** 2025-09-11 01:01:26.203369 | orchestrator | 2025-09-11 01:01:26.203380 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-11 01:01:26.203391 | orchestrator | Thursday 11 September 2025 01:00:47 +0000 (0:00:00.060) 0:02:06.866 **** 2025-09-11 01:01:26.203401 | orchestrator | 2025-09-11 01:01:26.203412 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-11 01:01:26.203423 | orchestrator | Thursday 11 September 2025 01:00:47 +0000 (0:00:00.059) 0:02:06.926 **** 2025-09-11 01:01:26.203434 | orchestrator | 2025-09-11 01:01:26.203445 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-11 01:01:26.203455 | orchestrator | Thursday 11 September 2025 01:00:47 +0000 (0:00:00.059) 0:02:06.985 **** 2025-09-11 01:01:26.203466 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:26.203477 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:01:26.203488 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:01:26.203499 | orchestrator | 2025-09-11 01:01:26.203510 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:01:26.203521 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-11 01:01:26.203533 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-11 01:01:26.203544 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-11 01:01:26.203555 | orchestrator | 2025-09-11 01:01:26.203566 | orchestrator | 2025-09-11 01:01:26.203577 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:01:26.203588 | orchestrator | Thursday 11 September 2025 01:01:23 +0000 (0:00:35.613) 0:02:42.598 **** 2025-09-11 01:01:26.203599 | orchestrator | =============================================================================== 2025-09-11 01:01:26.203610 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.61s 2025-09-11 01:01:26.203620 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.34s 2025-09-11 01:01:26.203631 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.93s 2025-09-11 01:01:26.203642 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.14s 2025-09-11 01:01:26.203653 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.72s 2025-09-11 01:01:26.203664 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.51s 2025-09-11 01:01:26.203675 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.06s 2025-09-11 01:01:26.203686 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.74s 2025-09-11 01:01:26.203703 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.65s 2025-09-11 01:01:26.203714 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.64s 2025-09-11 01:01:26.203725 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.28s 2025-09-11 01:01:26.203736 | orchestrator | glance : Copying over config.json files for services -------------------- 4.11s 2025-09-11 01:01:26.203747 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.04s 2025-09-11 01:01:26.203762 | orchestrator | glance 
: Check glance containers ---------------------------------------- 3.83s 2025-09-11 01:01:26.203773 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.64s 2025-09-11 01:01:26.203784 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.61s 2025-09-11 01:01:26.203796 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.59s 2025-09-11 01:01:26.203806 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.30s 2025-09-11 01:01:26.203817 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.23s 2025-09-11 01:01:26.203828 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.18s 2025-09-11 01:01:26.203839 | orchestrator | 2025-09-11 01:01:26 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:26.203850 | orchestrator | 2025-09-11 01:01:26 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:26.205671 | orchestrator | 2025-09-11 01:01:26 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 01:01:26.206680 | orchestrator | 2025-09-11 01:01:26 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:26.206708 | orchestrator | 2025-09-11 01:01:26 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:29.255927 | orchestrator | 2025-09-11 01:01:29 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:29.256691 | orchestrator | 2025-09-11 01:01:29 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:29.259002 | orchestrator | 2025-09-11 01:01:29 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 01:01:29.259034 | orchestrator | 2025-09-11 01:01:29 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:29.259046 | orchestrator | 2025-09-11 01:01:29 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:32.293672 | orchestrator | 2025-09-11 01:01:32 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:32.295346 | orchestrator | 2025-09-11 01:01:32 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:32.296112 | orchestrator | 2025-09-11 01:01:32 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 01:01:32.297018 | orchestrator | 2025-09-11 01:01:32 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:32.297290 | orchestrator | 2025-09-11 01:01:32 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:35.328417 | orchestrator | 2025-09-11 01:01:35 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:35.329768 | orchestrator | 2025-09-11 01:01:35 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:35.331099 | orchestrator | 2025-09-11 01:01:35 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 01:01:35.332188 | orchestrator | 2025-09-11 01:01:35 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:35.332208 | orchestrator | 2025-09-11 01:01:35 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:38.370949 | orchestrator | 2025-09-11 01:01:38 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:38.372895 | orchestrator | 2025-09-11 01:01:38 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:38.375526 | orchestrator | 2025-09-11 01:01:38 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state STARTED 2025-09-11 01:01:38.377221 | orchestrator | 2025-09-11 01:01:38 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:38.377245 | orchestrator | 2025-09-11 01:01:38 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:41.425204 | orchestrator | 2025-09-11 01:01:41 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:41.426731 | orchestrator | 2025-09-11 01:01:41 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:41.430788 | orchestrator | 2025-09-11 01:01:41 | INFO  | Task ab4a9b75-dd5e-4ceb-861e-cca9f19887cc is in state SUCCESS 2025-09-11 01:01:41.432709 | orchestrator | 2025-09-11 01:01:41.432742 | orchestrator | 2025-09-11 01:01:41.432754 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 01:01:41.432765 | orchestrator | 2025-09-11 01:01:41.432776 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 01:01:41.432788 | orchestrator | Thursday 11 September 2025 00:58:33 +0000 (0:00:00.291) 0:00:00.291 **** 2025-09-11 01:01:41.432799 | orchestrator | ok: [testbed-manager] 2025-09-11 01:01:41.432811 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:01:41.432934 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:01:41.432950 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:01:41.432961 | orchestrator | ok: [testbed-node-3] 2025-09-11 01:01:41.432972 | orchestrator | ok: [testbed-node-4] 2025-09-11 01:01:41.432983 | orchestrator | ok: [testbed-node-5] 2025-09-11 01:01:41.433036 | orchestrator | 2025-09-11 01:01:41.433049 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 01:01:41.433060 | orchestrator | Thursday 11 September 2025 00:58:34 +0000 (0:00:00.901) 0:00:01.193 **** 2025-09-11 01:01:41.433071 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-11 01:01:41.433111 | orchestrator | ok: [testbed-node-0] => 
(item=enable_prometheus_True) 2025-09-11 01:01:41.433171 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-11 01:01:41.433258 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-11 01:01:41.433271 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-11 01:01:41.433282 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-11 01:01:41.433292 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-11 01:01:41.433303 | orchestrator | 2025-09-11 01:01:41.433316 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-11 01:01:41.433329 | orchestrator | 2025-09-11 01:01:41.433341 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-11 01:01:41.433354 | orchestrator | Thursday 11 September 2025 00:58:35 +0000 (0:00:00.744) 0:00:01.937 **** 2025-09-11 01:01:41.433367 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 01:01:41.433382 | orchestrator | 2025-09-11 01:01:41.433395 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-11 01:01:41.433407 | orchestrator | Thursday 11 September 2025 00:58:37 +0000 (0:00:01.877) 0:00:03.815 **** 2025-09-11 01:01:41.433423 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-11 01:01:41.433467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.433481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.433495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.433528 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.433544 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.433619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.433643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.433675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.433689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.433701 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.433713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.433735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.433753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.433765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.433776 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.433796 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-11 01:01:41.433811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.433853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.433877 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.433952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.433965 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.433993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434209 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434266 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434290 | orchestrator | 2025-09-11 01:01:41.434302 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-11 01:01:41.434322 | orchestrator | Thursday 11 September 2025 00:58:40 +0000 (0:00:03.469) 0:00:07.285 **** 2025-09-11 01:01:41.434333 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 01:01:41.434406 | orchestrator | 2025-09-11 01:01:41.434420 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-11 01:01:41.434431 | orchestrator | Thursday 11 September 2025 00:58:42 +0000 (0:00:01.212) 0:00:08.497 **** 2025-09-11 01:01:41.434442 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-11 01:01:41.434454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.434466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.434477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.434496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 
01:01:41.434513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.434545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.434557 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.434584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434618 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434686 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434703 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434733 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-11 01:01:41.434746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434758 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434854 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.434911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.434962 | orchestrator | 2025-09-11 01:01:41.434974 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-11 01:01:41.434985 | orchestrator | Thursday 11 September 2025 00:58:48 +0000 (0:00:06.219) 
0:00:14.716 **** 2025-09-11 01:01:41.434996 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-11 01:01:41.435008 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435019 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 
01:01:41.435031 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-11 01:01:41.435048 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435161 | orchestrator | skipping: [testbed-manager] 2025-09-11 01:01:41.435172 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:41.435184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-09-11 01:01:41.435239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435297 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:41.435308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-09-11 01:01:41.435341 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:41.435369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435409 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:01:41.435420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435454 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:01:41.435465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435519 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:01:41.435531 | orchestrator | 2025-09-11 01:01:41.435542 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-11 01:01:41.435553 | orchestrator | Thursday 11 September 2025 00:58:49 +0000 (0:00:01.277) 0:00:15.994 **** 2025-09-11 01:01:41.435564 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-11 01:01:41.435576 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435588 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435599 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-11 01:01:41.435618 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435629 | orchestrator | skipping: [testbed-manager] 2025-09-11 01:01:41.435652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435710 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435744 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:41.435755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435802 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:41.435813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-11 01:01:41.435877 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:41.435893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435932 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:01:41.435943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.435955 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.435983 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:01:41.435994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-11 01:01:41.436005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.436024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-11 01:01:41.436035 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:01:41.436046 | orchestrator | 2025-09-11 01:01:41.436062 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-11 01:01:41.436074 | orchestrator | Thursday 11 September 2025 00:58:51 +0000 (0:00:01.724) 0:00:17.718 **** 2025-09-11 01:01:41.436140 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': 
True}}}}) 2025-09-11 01:01:41.436152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.436171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.436182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.436194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.436205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.436222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.436239 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.436250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.436262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.436280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.436291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.436303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.436314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.436331 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.436349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.436360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.436369 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.436386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.436396 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-11 01:01:41.436407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.436424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.436439 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.436449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.436465 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.436475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.436485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.436495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.436505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.436515 | orchestrator | 
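The loop items echoed above all share the same kolla-ansible service-dict shape ('container_name', 'group', 'enabled', 'image', 'volumes', 'dimensions', optional 'haproxy'), and whether a host reports "changed" or "skipping" for an item depends on the host being in that service's group. A minimal sketch of that structure and selection logic — not kolla-ansible's actual code; the `services_for_host` helper is a hypothetical illustration, though the dict keys and values are copied from the log records:

```python
# Service definitions mirroring two of the loop items above (values taken
# verbatim from the log; this is an illustrative subset, not the full set).
services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
        "pid_mode": "host",
        "volumes": [
            "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "/:/host:ro,rslave",
        ],
        "dimensions": {},
    },
    "prometheus-libvirt-exporter": {
        "container_name": "prometheus_libvirt_exporter",
        "group": "prometheus-libvirt-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2",
        "volumes": [
            "/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "/run/libvirt:/run/libvirt:ro",
        ],
        "dimensions": {},
    },
}


def services_for_host(services, host_groups):
    """Hypothetical helper: a service applies to a host only when it is
    enabled and the host belongs to the service's inventory group —
    otherwise the task loop reports 'skipping' for that item."""
    return {
        name: svc
        for name, svc in services.items()
        if svc["enabled"] and svc["group"] in host_groups
    }


# A compute node (e.g. testbed-node-4) is in both exporter groups, so both
# items come back 'changed'; the manager is not in the libvirt-exporter
# group, matching its 'skipping' lines for that item in the log.
compute = services_for_host(
    services, {"prometheus-node-exporter", "prometheus-libvirt-exporter"}
)
manager = services_for_host(services, {"prometheus-node-exporter"})
```

This matches the pattern visible in the log, where every node reports "changed" for prometheus-node-exporter, but only the compute nodes (testbed-node-3/4/5) act on prometheus-libvirt-exporter.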
2025-09-11 01:01:41.436525 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-11 01:01:41.436535 | orchestrator | Thursday 11 September 2025 00:58:56 +0000 (0:00:05.167) 0:00:22.886 **** 2025-09-11 01:01:41.436545 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-11 01:01:41.436554 | orchestrator | 2025-09-11 01:01:41.436564 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-11 01:01:41.436579 | orchestrator | Thursday 11 September 2025 00:58:57 +0000 (0:00:00.963) 0:00:23.850 **** 2025-09-11 01:01:41.436593 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101193, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3456063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436604 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101193, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3456063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436620 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101193, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3456063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436630 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101193, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3456063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436641 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1101215, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3533547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436652 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 996, 'inode': 1101193, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3456063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436667 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101193, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3456063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436682 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1101215, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3533547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436698 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1101215, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3533547, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436708 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1101215, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3533547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436718 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1101215, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3533547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436728 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1101184, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3446832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436738 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1101184, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3446832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436753 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1101184, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3446832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436768 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1101215, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3533547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436784 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1101211, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3514552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436794 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1101184, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3446832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436804 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1101211, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3514552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436814 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101193, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3456063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.436824 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1101211, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3514552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436839 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1101184, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3446832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436853 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1101178, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 
1757549852.3421876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436869 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1101184, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3446832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436880 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1101178, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3421876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.436890 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1101211, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3514552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.437181 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules) 2025-09-11 01:01:41.437523 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules) 2025-09-11 01:01:41.437770 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules) 2025-09-11 01:01:41 | orchestrator | skipping: [testbed-node-0] ... [testbed-node-5] => (items under /operations/prometheus: alertmanager.rules, alertmanager.rec.rules, cadvisor.rules, ceph.rec.rules, elasticsearch.rules, haproxy.rules, hardware.rules, mysql.rules, node.rules, node.rec.rules, openstack.rules, prometheus-extra.rules, prometheus.rec.rules, redfish.rules; per-item stat output condensed)  2025-09-11 01:01:41.437848 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101199, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3469257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.437858 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1101213, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3520706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.437874 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101286, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3846838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.437884 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:01:41.437894 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101286, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3846838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.437904 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:01:41.437914 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101199, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3469257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.437933 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101286, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3846838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.437943 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:41.437953 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1050601, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3863492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.437963 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101179, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3426833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.437973 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101286, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3846838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.437988 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:41.437998 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1101213, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3520706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.438008 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1101175, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3417602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.438046 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1101178, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3421876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438069 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101179, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 
1757549852.3426833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.438097 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1101202, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3496833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.438107 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1101175, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3417602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.438125 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101199, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3469257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.438135 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1101202, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3496833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.438145 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101286, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3846838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.438155 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:01:41.438165 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1101195, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3456833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438183 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101199, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3469257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.438194 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101286, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3846838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-11 01:01:41.438204 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:41.438213 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1101209, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3506832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438229 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1101197, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3467228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438239 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1101192, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3446832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438249 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101214, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3526893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438259 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101173, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3415296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438278 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1050601, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3863492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438289 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1101213, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3520706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438299 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101179, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 
1757549852.3426833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438314 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1101175, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3417602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438324 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1101202, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3496833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-11 01:01:41.438334 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101199, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3469257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 01:01:41.438344 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101286, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.3846838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-11 01:01:41.438354 | orchestrator |
2025-09-11 01:01:41.438364 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-11 01:01:41.438374 | orchestrator | Thursday 11 September 2025 00:59:19 +0000 (0:00:22.445) 0:00:46.295 ****
2025-09-11 01:01:41.438387 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-11 01:01:41.438397 | orchestrator |
2025-09-11 01:01:41.438407 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-11 01:01:41.438417 | orchestrator | Thursday 11 September 2025 00:59:20 +0000 (0:00:00.617) 0:00:46.913 ****
2025-09-11 01:01:41.438427 | orchestrator | [WARNING]: Skipped
2025-09-11 01:01:41.438437 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438451 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-09-11 01:01:41.438461 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438470 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-09-11 01:01:41.438480 | orchestrator | [WARNING]: Skipped
2025-09-11 01:01:41.438490 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438505 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-09-11 01:01:41.438515 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438524 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-09-11 01:01:41.438533 | orchestrator | [WARNING]: Skipped
2025-09-11 01:01:41.438543 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438552 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-09-11 01:01:41.438562 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438571 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-09-11 01:01:41.438581 | orchestrator | [WARNING]: Skipped
2025-09-11 01:01:41.438590 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438600 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-09-11 01:01:41.438609 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438619 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-09-11 01:01:41.438628 | orchestrator | [WARNING]: Skipped
2025-09-11 01:01:41.438638 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438647 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-09-11 01:01:41.438656 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438666 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-09-11 01:01:41.438675 | orchestrator | [WARNING]: Skipped
2025-09-11 01:01:41.438685 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438695 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-09-11 01:01:41.438704 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438713 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-09-11 01:01:41.438723 | orchestrator | [WARNING]: Skipped
2025-09-11 01:01:41.438732 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438742 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-09-11 01:01:41.438751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-11 01:01:41.438761 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-09-11 01:01:41.438770 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-11 01:01:41.438780 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-11 01:01:41.438789 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-11 01:01:41.438798 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-11 01:01:41.438808 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-11 01:01:41.438817 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-11 01:01:41.438827 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-11 01:01:41.438836 | orchestrator |
2025-09-11 01:01:41.438846 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-11 01:01:41.438855 | orchestrator | Thursday 11 September 2025 00:59:23 +0000 (0:00:02.848) 0:00:49.761 ****
2025-09-11 01:01:41.438864 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-11 01:01:41.438874 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:01:41.438883 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-11 01:01:41.438893 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:01:41.438903 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-11 01:01:41.438912 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:01:41.438922 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-11 01:01:41.438937 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:01:41.438947 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-11 01:01:41.438956 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:01:41.438966 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-11 01:01:41.438975 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:01:41.438984 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-11 01:01:41.438994 | orchestrator |
2025-09-11 01:01:41.439003 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-11 01:01:41.439013 | orchestrator | Thursday 11 September 2025 00:59:38 +0000 (0:00:15.516) 0:01:05.278 ****
2025-09-11 01:01:41.439028 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-11 01:01:41.439037 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:01:41.439047 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-11 01:01:41.439057 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:01:41.439066 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-11 01:01:41.439126 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:01:41.439139 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-11 01:01:41.439148 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:01:41.439158 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-11 01:01:41.439167 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:01:41.439177 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-11 01:01:41.439186 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:01:41.439196 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-11 01:01:41.439205 | orchestrator |
2025-09-11 01:01:41.439215 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-11 01:01:41.439224 | orchestrator | Thursday 11 September 2025 00:59:42 +0000 (0:00:03.362) 0:01:08.641 ****
2025-09-11 01:01:41.439234 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-11 01:01:41.439244 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-11 01:01:41.439253 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-11 01:01:41.439263 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:01:41.439273 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:01:41.439282 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-11 01:01:41.439292 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:01:41.439301 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-11 01:01:41.439311 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:01:41.439321 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-11 01:01:41.439330 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:01:41.439340 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-11 01:01:41.439349 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:01:41.439370 | orchestrator |
2025-09-11 01:01:41.439380 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-09-11 01:01:41.439389 | orchestrator | Thursday 11 September 2025 00:59:44 +0000 (0:00:02.571) 0:01:11.213 ****
2025-09-11 01:01:41.439399 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-11 01:01:41.439408 | orchestrator |
2025-09-11 01:01:41.439418 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-09-11 01:01:41.439427 | orchestrator | Thursday 11 September 2025 00:59:45 +0000 (0:00:01.025) 0:01:12.238 ****
2025-09-11 01:01:41.439437 | orchestrator | skipping: [testbed-manager]
2025-09-11 01:01:41.439446 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:01:41.439456 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:01:41.439465 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:01:41.439475 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:01:41.439484 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:01:41.439494 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:01:41.439503 | orchestrator |
2025-09-11 01:01:41.439510 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-09-11 01:01:41.439518 | orchestrator | Thursday 11 September 2025 00:59:46 +0000 (0:00:00.490) 0:01:12.729 ****
2025-09-11 01:01:41.439526 | orchestrator | skipping: [testbed-manager]
2025-09-11 01:01:41.439534 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:01:41.439542 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:01:41.439549 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:01:41.439557 | orchestrator | changed: [testbed-node-1]
2025-09-11 01:01:41.439565 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:01:41.439572 | orchestrator | changed: [testbed-node-2]
2025-09-11 01:01:41.439580 | orchestrator |
2025-09-11 01:01:41.439588 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-09-11 01:01:41.439596 | orchestrator | Thursday 11 September 2025 00:59:49 +0000 (0:00:02.934) 0:01:15.663 ****
2025-09-11 01:01:41.439604 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-11 01:01:41.439612 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-11 01:01:41.439619 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:01:41.439627 | orchestrator | skipping: [testbed-manager]
2025-09-11 01:01:41.439635 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-11 01:01:41.439643 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:01:41.439650 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-11 01:01:41.439658 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:01:41.439670 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-11 01:01:41.439678 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:01:41.439686 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-11 01:01:41.439694 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:01:41.439706 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-11 01:01:41.439714 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:01:41.439722 | orchestrator | 2025-09-11 01:01:41.439729 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-11 01:01:41.439737 | orchestrator | Thursday 11 September 2025 00:59:51 +0000 (0:00:02.138) 0:01:17.803 **** 2025-09-11 01:01:41.439745 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-11 01:01:41.439753 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-11 01:01:41.439761 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:41.439769 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:41.439777 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-11 01:01:41.439790 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:41.439798 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-11 01:01:41.439806 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:01:41.439813 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-11 01:01:41.439821 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:01:41.439829 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-11 01:01:41.439837 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-11 01:01:41.439845 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:01:41.439852 | orchestrator | 2025-09-11 01:01:41.439860 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-11 01:01:41.439868 | orchestrator | Thursday 11 September 2025 00:59:53 +0000 (0:00:01.974) 0:01:19.777 **** 2025-09-11 01:01:41.439876 | orchestrator | [WARNING]: Skipped 2025-09-11 01:01:41.439884 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-11 01:01:41.439891 | orchestrator | due to this access issue: 2025-09-11 01:01:41.439899 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-11 01:01:41.439907 | orchestrator | not a directory 2025-09-11 01:01:41.439915 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-11 01:01:41.439923 | orchestrator | 2025-09-11 01:01:41.439931 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-11 01:01:41.439938 | orchestrator | Thursday 11 September 2025 00:59:55 +0000 (0:00:02.247) 0:01:22.025 **** 2025-09-11 01:01:41.439946 | orchestrator | skipping: [testbed-manager] 2025-09-11 01:01:41.439954 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:41.439962 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:41.439969 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:41.439977 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:01:41.439985 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:01:41.439992 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:01:41.440000 | orchestrator | 2025-09-11 01:01:41.440008 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-11 01:01:41.440015 | orchestrator | Thursday 11 September 2025 00:59:56 +0000 
(0:00:01.147) 0:01:23.172 **** 2025-09-11 01:01:41.440023 | orchestrator | skipping: [testbed-manager] 2025-09-11 01:01:41.440031 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:01:41.440038 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:01:41.440046 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:01:41.440054 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:01:41.440061 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:01:41.440069 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:01:41.440077 | orchestrator | 2025-09-11 01:01:41.440101 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-11 01:01:41.440109 | orchestrator | Thursday 11 September 2025 00:59:57 +0000 (0:00:01.082) 0:01:24.255 **** 2025-09-11 01:01:41.440117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.440130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.440148 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-11 01:01:41.440157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.440166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.440174 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.440182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.440190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-11 01:01:41.440198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.440215 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.440227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.440236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.440244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.440253 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-11 01:01:41.440262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.440275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.440292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.440300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.440309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.440317 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.440325 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.440333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.440341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.440353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.440368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.440380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-11 01:01:41.440389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.440397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.440405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-11 01:01:41.440413 | orchestrator | 2025-09-11 01:01:41.440421 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-11 01:01:41.440429 | orchestrator | Thursday 11 September 2025 01:00:03 +0000 (0:00:06.051) 0:01:30.306 **** 2025-09-11 01:01:41.440436 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-11 01:01:41.440444 | orchestrator | skipping: [testbed-manager] 2025-09-11 01:01:41.440452 | orchestrator | 2025-09-11 
01:01:41.440460 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-11 01:01:41.440472 | orchestrator | Thursday 11 September 2025 01:00:04 +0000 (0:00:00.949) 0:01:31.256 **** 2025-09-11 01:01:41.440480 | orchestrator | 2025-09-11 01:01:41.440488 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-11 01:01:41.440496 | orchestrator | Thursday 11 September 2025 01:00:04 +0000 (0:00:00.063) 0:01:31.320 **** 2025-09-11 01:01:41.440503 | orchestrator | 2025-09-11 01:01:41.440511 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-11 01:01:41.440519 | orchestrator | Thursday 11 September 2025 01:00:05 +0000 (0:00:00.056) 0:01:31.377 **** 2025-09-11 01:01:41.440526 | orchestrator | 2025-09-11 01:01:41.440534 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-11 01:01:41.440542 | orchestrator | Thursday 11 September 2025 01:00:05 +0000 (0:00:00.075) 0:01:31.453 **** 2025-09-11 01:01:41.440550 | orchestrator | 2025-09-11 01:01:41.440557 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-11 01:01:41.440565 | orchestrator | Thursday 11 September 2025 01:00:05 +0000 (0:00:00.279) 0:01:31.732 **** 2025-09-11 01:01:41.440573 | orchestrator | 2025-09-11 01:01:41.440580 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-11 01:01:41.440588 | orchestrator | Thursday 11 September 2025 01:00:05 +0000 (0:00:00.122) 0:01:31.854 **** 2025-09-11 01:01:41.440596 | orchestrator | 2025-09-11 01:01:41.440604 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-11 01:01:41.440611 | orchestrator | Thursday 11 September 2025 01:00:05 +0000 (0:00:00.110) 0:01:31.964 **** 2025-09-11 01:01:41.440619 | orchestrator | 
2025-09-11 01:01:41.440627 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-11 01:01:41.440634 | orchestrator | Thursday 11 September 2025 01:00:05 +0000 (0:00:00.143) 0:01:32.107 **** 2025-09-11 01:01:41.440642 | orchestrator | changed: [testbed-manager] 2025-09-11 01:01:41.440650 | orchestrator | 2025-09-11 01:01:41.440658 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-11 01:01:41.440670 | orchestrator | Thursday 11 September 2025 01:00:27 +0000 (0:00:22.082) 0:01:54.190 **** 2025-09-11 01:01:41.440678 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:01:41.440685 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:41.440693 | orchestrator | changed: [testbed-manager] 2025-09-11 01:01:41.440701 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:01:41.440709 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:01:41.440716 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:01:41.440724 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:01:41.440732 | orchestrator | 2025-09-11 01:01:41.440743 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-11 01:01:41.440751 | orchestrator | Thursday 11 September 2025 01:00:40 +0000 (0:00:12.361) 0:02:06.551 **** 2025-09-11 01:01:41.440759 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:01:41.440766 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:01:41.440774 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:41.440782 | orchestrator | 2025-09-11 01:01:41.440789 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-11 01:01:41.440797 | orchestrator | Thursday 11 September 2025 01:00:50 +0000 (0:00:09.800) 0:02:16.351 **** 2025-09-11 01:01:41.440805 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:01:41.440812 | orchestrator | changed: 
[testbed-node-2] 2025-09-11 01:01:41.440820 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:41.440828 | orchestrator | 2025-09-11 01:01:41.440836 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-11 01:01:41.440843 | orchestrator | Thursday 11 September 2025 01:01:00 +0000 (0:00:10.246) 0:02:26.598 **** 2025-09-11 01:01:41.440851 | orchestrator | changed: [testbed-manager] 2025-09-11 01:01:41.440859 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:01:41.440866 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:01:41.440874 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:01:41.440882 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:01:41.440895 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:41.440903 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:01:41.440911 | orchestrator | 2025-09-11 01:01:41.440918 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-11 01:01:41.440926 | orchestrator | Thursday 11 September 2025 01:01:14 +0000 (0:00:14.372) 0:02:40.971 **** 2025-09-11 01:01:41.440934 | orchestrator | changed: [testbed-manager] 2025-09-11 01:01:41.440942 | orchestrator | 2025-09-11 01:01:41.440950 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-11 01:01:41.440957 | orchestrator | Thursday 11 September 2025 01:01:22 +0000 (0:00:07.955) 0:02:48.926 **** 2025-09-11 01:01:41.440965 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:01:41.440973 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:01:41.440981 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:01:41.440988 | orchestrator | 2025-09-11 01:01:41.440996 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-11 01:01:41.441004 | orchestrator | Thursday 11 September 2025 01:01:27 +0000 (0:00:04.723) 
0:02:53.650 **** 2025-09-11 01:01:41.441012 | orchestrator | changed: [testbed-manager] 2025-09-11 01:01:41.441019 | orchestrator | 2025-09-11 01:01:41.441027 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-11 01:01:41.441035 | orchestrator | Thursday 11 September 2025 01:01:31 +0000 (0:00:04.288) 0:02:57.938 **** 2025-09-11 01:01:41.441043 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:01:41.441050 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:01:41.441058 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:01:41.441066 | orchestrator | 2025-09-11 01:01:41.441074 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:01:41.441098 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-11 01:01:41.441107 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-11 01:01:41.441115 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-11 01:01:41.441123 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-11 01:01:41.441130 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-11 01:01:41.441138 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-11 01:01:41.441146 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-11 01:01:41.441153 | orchestrator | 2025-09-11 01:01:41.441161 | orchestrator | 2025-09-11 01:01:41.441169 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:01:41.441177 | orchestrator | Thursday 11 September 2025 01:01:38 +0000 
(0:00:06.993) 0:03:04.932 **** 2025-09-11 01:01:41.441185 | orchestrator | =============================================================================== 2025-09-11 01:01:41.441192 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.45s 2025-09-11 01:01:41.441200 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.08s 2025-09-11 01:01:41.441208 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.52s 2025-09-11 01:01:41.441216 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.37s 2025-09-11 01:01:41.441228 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.36s 2025-09-11 01:01:41.441241 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.25s 2025-09-11 01:01:41.441249 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.80s 2025-09-11 01:01:41.441257 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.96s 2025-09-11 01:01:41.441265 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.99s 2025-09-11 01:01:41.441273 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.22s 2025-09-11 01:01:41.441281 | orchestrator | prometheus : Check prometheus containers -------------------------------- 6.05s 2025-09-11 01:01:41.441289 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.17s 2025-09-11 01:01:41.441297 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.72s 2025-09-11 01:01:41.441304 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.29s 2025-09-11 01:01:41.441312 | orchestrator | prometheus : Ensuring config directories exist 
-------------------------- 3.47s 2025-09-11 01:01:41.441320 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.36s 2025-09-11 01:01:41.441327 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.94s 2025-09-11 01:01:41.441335 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.85s 2025-09-11 01:01:41.441343 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.57s 2025-09-11 01:01:41.441351 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 2.25s 2025-09-11 01:01:41.441387 | orchestrator | 2025-09-11 01:01:41 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:41.441395 | orchestrator | 2025-09-11 01:01:41 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:01:41.441404 | orchestrator | 2025-09-11 01:01:41 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:44.491320 | orchestrator | 2025-09-11 01:01:44 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:44.493139 | orchestrator | 2025-09-11 01:01:44 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:44.495009 | orchestrator | 2025-09-11 01:01:44 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:44.498780 | orchestrator | 2025-09-11 01:01:44 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:01:44.498827 | orchestrator | 2025-09-11 01:01:44 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:47.540137 | orchestrator | 2025-09-11 01:01:47 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:47.540539 | orchestrator | 2025-09-11 01:01:47 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:47.542137 | 
orchestrator | 2025-09-11 01:01:47 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:47.542809 | orchestrator | 2025-09-11 01:01:47 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:01:47.542835 | orchestrator | 2025-09-11 01:01:47 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:50.583243 | orchestrator | 2025-09-11 01:01:50 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:50.584664 | orchestrator | 2025-09-11 01:01:50 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:50.586689 | orchestrator | 2025-09-11 01:01:50 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:50.588127 | orchestrator | 2025-09-11 01:01:50 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:01:50.588502 | orchestrator | 2025-09-11 01:01:50 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:53.633602 | orchestrator | 2025-09-11 01:01:53 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:53.635163 | orchestrator | 2025-09-11 01:01:53 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:53.636768 | orchestrator | 2025-09-11 01:01:53 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:53.638297 | orchestrator | 2025-09-11 01:01:53 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:01:53.638641 | orchestrator | 2025-09-11 01:01:53 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:56.683399 | orchestrator | 2025-09-11 01:01:56 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:56.685640 | orchestrator | 2025-09-11 01:01:56 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:56.687851 | orchestrator | 2025-09-11 
01:01:56 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:56.689838 | orchestrator | 2025-09-11 01:01:56 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:01:56.689978 | orchestrator | 2025-09-11 01:01:56 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:01:59.732170 | orchestrator | 2025-09-11 01:01:59 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:01:59.733481 | orchestrator | 2025-09-11 01:01:59 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:01:59.735329 | orchestrator | 2025-09-11 01:01:59 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:01:59.736996 | orchestrator | 2025-09-11 01:01:59 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:01:59.737587 | orchestrator | 2025-09-11 01:01:59 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:02.782701 | orchestrator | 2025-09-11 01:02:02 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:02.785022 | orchestrator | 2025-09-11 01:02:02 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:02:02.786732 | orchestrator | 2025-09-11 01:02:02 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:02.788430 | orchestrator | 2025-09-11 01:02:02 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:02.788885 | orchestrator | 2025-09-11 01:02:02 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:05.833013 | orchestrator | 2025-09-11 01:02:05 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:05.834703 | orchestrator | 2025-09-11 01:02:05 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:02:05.836474 | orchestrator | 2025-09-11 01:02:05 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:05.838413 | orchestrator | 2025-09-11 01:02:05 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:05.838449 | orchestrator | 2025-09-11 01:02:05 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:08.941016 | orchestrator | 2025-09-11 01:02:08 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:08.941144 | orchestrator | 2025-09-11 01:02:08 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:02:08.941160 | orchestrator | 2025-09-11 01:02:08 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:08.942160 | orchestrator | 2025-09-11 01:02:08 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:08.942189 | orchestrator | 2025-09-11 01:02:08 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:11.990160 | orchestrator | 2025-09-11 01:02:11 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:11.991298 | orchestrator | 2025-09-11 01:02:11 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:02:11.992748 | orchestrator | 2025-09-11 01:02:11 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:11.993796 | orchestrator | 2025-09-11 01:02:11 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:11.994132 | orchestrator | 2025-09-11 01:02:11 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:15.063991 | orchestrator | 2025-09-11 01:02:15 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:15.064116 | orchestrator | 2025-09-11 01:02:15 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:02:15.065315 | orchestrator | 2025-09-11 01:02:15 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:15.067599 | orchestrator | 2025-09-11 01:02:15 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:15.067640 | orchestrator | 2025-09-11 01:02:15 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:18.102374 | orchestrator | 2025-09-11 01:02:18 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:18.103047 | orchestrator | 2025-09-11 01:02:18 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:02:18.103652 | orchestrator | 2025-09-11 01:02:18 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:18.104382 | orchestrator | 2025-09-11 01:02:18 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:18.104428 | orchestrator | 2025-09-11 01:02:18 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:21.137292 | orchestrator | 2025-09-11 01:02:21 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:21.138846 | orchestrator | 2025-09-11 01:02:21 | INFO  | Task bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state STARTED 2025-09-11 01:02:21.140554 | orchestrator | 2025-09-11 01:02:21 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:21.142270 | orchestrator | 2025-09-11 01:02:21 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:21.142846 | orchestrator | 2025-09-11 01:02:21 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:24.164477 | orchestrator | 2025-09-11 01:02:24 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:02:24.164564 | orchestrator | 2025-09-11 01:02:24 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:24.166978 | orchestrator | 2025-09-11 01:02:24 | INFO  | Task 
bc3facad-740a-4413-b9b4-b4b921f4cfa1 is in state SUCCESS 2025-09-11 01:02:24.167146 | orchestrator | 2025-09-11 01:02:24.168559 | orchestrator | 2025-09-11 01:02:24.168585 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 01:02:24.168597 | orchestrator | 2025-09-11 01:02:24.168609 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 01:02:24.168656 | orchestrator | Thursday 11 September 2025 00:58:48 +0000 (0:00:00.334) 0:00:00.334 **** 2025-09-11 01:02:24.168690 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:02:24.168703 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:02:24.168714 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:02:24.168724 | orchestrator | ok: [testbed-node-3] 2025-09-11 01:02:24.168733 | orchestrator | ok: [testbed-node-4] 2025-09-11 01:02:24.168742 | orchestrator | ok: [testbed-node-5] 2025-09-11 01:02:24.168752 | orchestrator | 2025-09-11 01:02:24.168761 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 01:02:24.168771 | orchestrator | Thursday 11 September 2025 00:58:49 +0000 (0:00:00.719) 0:00:01.053 **** 2025-09-11 01:02:24.168780 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-11 01:02:24.168848 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-11 01:02:24.168859 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-11 01:02:24.168869 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-11 01:02:24.168878 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-11 01:02:24.168887 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-11 01:02:24.168897 | orchestrator | 2025-09-11 01:02:24.168907 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-11 01:02:24.168916 | 
orchestrator | 2025-09-11 01:02:24.168926 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-11 01:02:24.168935 | orchestrator | Thursday 11 September 2025 00:58:50 +0000 (0:00:00.674) 0:00:01.728 **** 2025-09-11 01:02:24.168945 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 01:02:24.168956 | orchestrator | 2025-09-11 01:02:24.168965 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-11 01:02:24.168975 | orchestrator | Thursday 11 September 2025 00:58:51 +0000 (0:00:01.099) 0:00:02.827 **** 2025-09-11 01:02:24.168985 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-11 01:02:24.168994 | orchestrator | 2025-09-11 01:02:24.169004 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-11 01:02:24.169013 | orchestrator | Thursday 11 September 2025 00:58:54 +0000 (0:00:03.145) 0:00:05.972 **** 2025-09-11 01:02:24.169023 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-11 01:02:24.169033 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-11 01:02:24.169042 | orchestrator | 2025-09-11 01:02:24.169052 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-11 01:02:24.169061 | orchestrator | Thursday 11 September 2025 00:59:01 +0000 (0:00:06.871) 0:00:12.844 **** 2025-09-11 01:02:24.169091 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-11 01:02:24.169101 | orchestrator | 2025-09-11 01:02:24.169123 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-11 01:02:24.169134 
| orchestrator | Thursday 11 September 2025 00:59:05 +0000 (0:00:03.680) 0:00:16.525 **** 2025-09-11 01:02:24.169143 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-11 01:02:24.169153 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-11 01:02:24.169162 | orchestrator | 2025-09-11 01:02:24.169172 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-11 01:02:24.169181 | orchestrator | Thursday 11 September 2025 00:59:09 +0000 (0:00:04.286) 0:00:20.811 **** 2025-09-11 01:02:24.169190 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-11 01:02:24.169210 | orchestrator | 2025-09-11 01:02:24.169220 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-11 01:02:24.169230 | orchestrator | Thursday 11 September 2025 00:59:12 +0000 (0:00:03.591) 0:00:24.402 **** 2025-09-11 01:02:24.169263 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-11 01:02:24.169281 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-11 01:02:24.169291 | orchestrator | 2025-09-11 01:02:24.169300 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-11 01:02:24.169310 | orchestrator | Thursday 11 September 2025 00:59:21 +0000 (0:00:08.977) 0:00:33.379 **** 2025-09-11 01:02:24.169358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.169385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.169397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.169408 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.169418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.169438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.169455 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.169466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.169476 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.169487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.169506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.169517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.169527 | orchestrator | 2025-09-11 01:02:24.169553 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-11 01:02:24.169563 | orchestrator | Thursday 11 September 2025 00:59:24 +0000 (0:00:02.762) 0:00:36.142 **** 2025-09-11 01:02:24.169573 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:02:24.169583 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:02:24.169592 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:02:24.169601 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:02:24.169611 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:02:24.169620 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:02:24.169630 | orchestrator | 2025-09-11 01:02:24.169639 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-11 01:02:24.169649 | orchestrator | Thursday 11 September 2025 00:59:25 +0000 (0:00:00.581) 0:00:36.724 **** 2025-09-11 01:02:24.169658 | orchestrator | skipping: 
[testbed-node-0] 2025-09-11 01:02:24.169668 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:02:24.169677 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:02:24.169687 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 01:02:24.169696 | orchestrator | 2025-09-11 01:02:24.169706 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-11 01:02:24.169715 | orchestrator | Thursday 11 September 2025 00:59:25 +0000 (0:00:00.672) 0:00:37.397 **** 2025-09-11 01:02:24.169725 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-11 01:02:24.169734 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-11 01:02:24.169744 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-11 01:02:24.169753 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-11 01:02:24.169762 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-11 01:02:24.169772 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-11 01:02:24.169781 | orchestrator | 2025-09-11 01:02:24.169790 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-11 01:02:24.169800 | orchestrator | Thursday 11 September 2025 00:59:27 +0000 (0:00:01.668) 0:00:39.066 **** 2025-09-11 01:02:24.169810 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-11 01:02:24.169827 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-11 01:02:24.169841 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-11 01:02:24.169857 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-11 01:02:24.169868 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-11 01:02:24.169878 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-11 01:02:24.169893 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-11 01:02:24.169908 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-11 01:02:24.169924 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-11 01:02:24.169935 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-11 01:02:24.169950 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-11 01:02:24.169960 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-11 01:02:24.169970 | orchestrator | 2025-09-11 01:02:24.169980 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-11 01:02:24.169990 | orchestrator | Thursday 11 September 2025 00:59:31 +0000 (0:00:03.613) 0:00:42.679 **** 2025-09-11 
01:02:24.170003 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-11 01:02:24.170052 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-11 01:02:24.170107 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-11 01:02:24.170119 | orchestrator | 2025-09-11 01:02:24.170129 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-11 01:02:24.170138 | orchestrator | Thursday 11 September 2025 00:59:33 +0000 (0:00:01.917) 0:00:44.597 **** 2025-09-11 01:02:24.170148 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-11 01:02:24.170157 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-11 01:02:24.170167 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-11 01:02:24.170176 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-11 01:02:24.170186 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-11 01:02:24.170202 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-11 01:02:24.170212 | orchestrator | 2025-09-11 01:02:24.170221 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-11 01:02:24.170231 | orchestrator | Thursday 11 September 2025 00:59:36 +0000 (0:00:03.059) 0:00:47.656 **** 2025-09-11 01:02:24.170240 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-11 01:02:24.170250 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-11 01:02:24.170259 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-11 01:02:24.170268 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-11 
01:02:24.170278 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-11 01:02:24.170287 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-11 01:02:24.170297 | orchestrator | 2025-09-11 01:02:24.170306 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-11 01:02:24.170323 | orchestrator | Thursday 11 September 2025 00:59:37 +0000 (0:00:01.099) 0:00:48.756 **** 2025-09-11 01:02:24.170332 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:02:24.170342 | orchestrator | 2025-09-11 01:02:24.170351 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-11 01:02:24.170361 | orchestrator | Thursday 11 September 2025 00:59:37 +0000 (0:00:00.168) 0:00:48.924 **** 2025-09-11 01:02:24.170370 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:02:24.170380 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:02:24.170389 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:02:24.170398 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:02:24.170408 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:02:24.170417 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:02:24.170426 | orchestrator | 2025-09-11 01:02:24.170436 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-11 01:02:24.170445 | orchestrator | Thursday 11 September 2025 00:59:38 +0000 (0:00:00.872) 0:00:49.796 **** 2025-09-11 01:02:24.170456 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 01:02:24.170466 | orchestrator | 2025-09-11 01:02:24.170475 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-11 01:02:24.170485 | orchestrator | Thursday 11 September 2025 00:59:39 +0000 (0:00:01.066) 0:00:50.862 
**** 2025-09-11 01:02:24.170495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.170515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.170539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.170556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.170566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.170576 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.170591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.170601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.170618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.170634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.170644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.170654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.170664 | orchestrator | 2025-09-11 01:02:24.170673 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-11 01:02:24.170683 | orchestrator | Thursday 11 September 2025 00:59:42 +0000 (0:00:03.509) 0:00:54.372 **** 2025-09-11 01:02:24.170698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 01:02:24.170713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170729 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:02:24.170739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 01:02:24.170749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170759 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:02:24.170769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170780 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170789 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:02:24.170803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 01:02:24.170824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170835 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:02:24.170845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170865 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:02:24.170875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170904 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:02:24.170914 | orchestrator | 2025-09-11 01:02:24.170924 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-11 01:02:24.170933 
| orchestrator | Thursday 11 September 2025 00:59:44 +0000 (0:00:02.002) 0:00:56.375 **** 2025-09-11 01:02:24.170949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 01:02:24.170959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 01:02:24.170979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.170989 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:02:24.170999 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:02:24.171013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 01:02:24.171034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171045 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:02:24.171054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171097 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:02:24.171116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171127 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:02:24.171142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171162 | 
orchestrator | skipping: [testbed-node-5] 2025-09-11 01:02:24.171172 | orchestrator | 2025-09-11 01:02:24.171181 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-11 01:02:24.171191 | orchestrator | Thursday 11 September 2025 00:59:46 +0000 (0:00:01.552) 0:00:57.927 **** 2025-09-11 01:02:24.171201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.171211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.171233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.171249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 
01:02:24.171259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171335 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171354 | orchestrator | 2025-09-11 01:02:24.171364 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] 
********************************** 2025-09-11 01:02:24.171379 | orchestrator | Thursday 11 September 2025 00:59:50 +0000 (0:00:03.909) 0:01:01.836 **** 2025-09-11 01:02:24.171389 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-11 01:02:24.171399 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:02:24.171408 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-11 01:02:24.171418 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:02:24.171428 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-11 01:02:24.171437 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:02:24.171447 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-11 01:02:24.171456 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-11 01:02:24.171466 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-11 01:02:24.171475 | orchestrator | 2025-09-11 01:02:24.171485 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-11 01:02:24.171494 | orchestrator | Thursday 11 September 2025 00:59:52 +0000 (0:00:02.032) 0:01:03.868 **** 2025-09-11 01:02:24.171508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.171524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.171535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.171545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171579 | orchestrator 
| changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.171658 | orchestrator | 2025-09-11 01:02:24.171668 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-11 01:02:24.171678 | orchestrator | Thursday 11 September 2025 01:00:02 +0000 (0:00:10.166) 0:01:14.035 **** 2025-09-11 01:02:24.171692 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:02:24.171702 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:02:24.171711 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:02:24.171720 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:02:24.171730 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:02:24.171739 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:02:24.171748 | orchestrator | 2025-09-11 01:02:24.171758 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-11 01:02:24.171767 | orchestrator | Thursday 11 September 2025 01:00:04 +0000 (0:00:02.129) 0:01:16.165 **** 2025-09-11 01:02:24.171777 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 01:02:24.171792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171802 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:02:24.171812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 01:02:24.171828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171838 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:02:24.171853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-11 01:02:24.171863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171878 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:02:24.171888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171908 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:02:24.171922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171942 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:02:24.171957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-11 01:02:24.171985 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:02:24.171994 | orchestrator | 2025-09-11 01:02:24.172004 | 
orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-11 01:02:24.172013 | orchestrator | Thursday 11 September 2025 01:00:06 +0000 (0:00:01.424) 0:01:17.589 **** 2025-09-11 01:02:24.172023 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:02:24.172032 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:02:24.172042 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:02:24.172051 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:02:24.172060 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:02:24.172084 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:02:24.172094 | orchestrator | 2025-09-11 01:02:24.172104 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-11 01:02:24.172113 | orchestrator | Thursday 11 September 2025 01:00:07 +0000 (0:00:00.996) 0:01:18.586 **** 2025-09-11 01:02:24.172123 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.172137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.172153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.172172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-11 01:02:24.172183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.172192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.172206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.172223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.172239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.172249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.172259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.172270 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-11 01:02:24.172279 | orchestrator | 2025-09-11 01:02:24.172289 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-11 01:02:24.172299 | orchestrator | Thursday 11 September 2025 01:00:09 +0000 (0:00:02.481) 0:01:21.068 **** 2025-09-11 01:02:24.172308 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:02:24.172318 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:02:24.172327 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:02:24.172340 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:02:24.172350 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:02:24.172359 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:02:24.172368 | orchestrator | 2025-09-11 01:02:24.172378 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-11 01:02:24.172387 | orchestrator | Thursday 11 September 2025 01:00:10 +0000 (0:00:00.619) 0:01:21.687 **** 2025-09-11 01:02:24.172397 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:02:24.172415 | orchestrator | 2025-09-11 01:02:24.172424 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-11 01:02:24.172434 | orchestrator | Thursday 11 September 2025 01:00:12 +0000 (0:00:02.312) 0:01:23.999 **** 2025-09-11 01:02:24.172443 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:02:24.172453 | orchestrator | 2025-09-11 01:02:24.172462 | orchestrator | TASK [cinder : Running Cinder 
bootstrap container] ***************************** 2025-09-11 01:02:24.172472 | orchestrator | Thursday 11 September 2025 01:00:14 +0000 (0:00:02.404) 0:01:26.404 **** 2025-09-11 01:02:24.172481 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:02:24.172491 | orchestrator | 2025-09-11 01:02:24.172500 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-11 01:02:24.172510 | orchestrator | Thursday 11 September 2025 01:00:33 +0000 (0:00:18.069) 0:01:44.474 **** 2025-09-11 01:02:24.172519 | orchestrator | 2025-09-11 01:02:24.172577 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-11 01:02:24.172589 | orchestrator | Thursday 11 September 2025 01:00:33 +0000 (0:00:00.100) 0:01:44.575 **** 2025-09-11 01:02:24.172598 | orchestrator | 2025-09-11 01:02:24.172608 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-11 01:02:24.172617 | orchestrator | Thursday 11 September 2025 01:00:33 +0000 (0:00:00.059) 0:01:44.635 **** 2025-09-11 01:02:24.172627 | orchestrator | 2025-09-11 01:02:24.172636 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-11 01:02:24.172645 | orchestrator | Thursday 11 September 2025 01:00:33 +0000 (0:00:00.065) 0:01:44.700 **** 2025-09-11 01:02:24.172655 | orchestrator | 2025-09-11 01:02:24.172664 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-11 01:02:24.172673 | orchestrator | Thursday 11 September 2025 01:00:33 +0000 (0:00:00.064) 0:01:44.764 **** 2025-09-11 01:02:24.172683 | orchestrator | 2025-09-11 01:02:24.172692 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-11 01:02:24.172701 | orchestrator | Thursday 11 September 2025 01:00:33 +0000 (0:00:00.061) 0:01:44.826 **** 2025-09-11 01:02:24.172711 | orchestrator | 
2025-09-11 01:02:24.172720 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-11 01:02:24.172730 | orchestrator | Thursday 11 September 2025 01:00:33 +0000 (0:00:00.060) 0:01:44.887 **** 2025-09-11 01:02:24.172739 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:02:24.172748 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:02:24.172758 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:02:24.172767 | orchestrator | 2025-09-11 01:02:24.172777 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-11 01:02:24.172786 | orchestrator | Thursday 11 September 2025 01:00:51 +0000 (0:00:17.846) 0:02:02.734 **** 2025-09-11 01:02:24.172795 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:02:24.172805 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:02:24.172814 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:02:24.172824 | orchestrator | 2025-09-11 01:02:24.172833 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-11 01:02:24.172842 | orchestrator | Thursday 11 September 2025 01:01:02 +0000 (0:00:10.968) 0:02:13.702 **** 2025-09-11 01:02:24.172852 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:02:24.172862 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:02:24.172871 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:02:24.172880 | orchestrator | 2025-09-11 01:02:24.172890 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-11 01:02:24.172899 | orchestrator | Thursday 11 September 2025 01:02:13 +0000 (0:01:11.094) 0:03:24.796 **** 2025-09-11 01:02:24.172909 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:02:24.172918 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:02:24.172927 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:02:24.172936 | orchestrator | 2025-09-11 
01:02:24.172946 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-11 01:02:24.172955 | orchestrator | Thursday 11 September 2025 01:02:20 +0000 (0:00:07.146) 0:03:31.943 **** 2025-09-11 01:02:24.172971 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:02:24.172981 | orchestrator | 2025-09-11 01:02:24.172990 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:02:24.173000 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-11 01:02:24.173010 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-11 01:02:24.173019 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-11 01:02:24.173029 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-11 01:02:24.173038 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-11 01:02:24.173048 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-11 01:02:24.173057 | orchestrator | 2025-09-11 01:02:24.173089 | orchestrator | 2025-09-11 01:02:24.173104 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:02:24.173113 | orchestrator | Thursday 11 September 2025 01:02:21 +0000 (0:00:00.591) 0:03:32.534 **** 2025-09-11 01:02:24.173123 | orchestrator | =============================================================================== 2025-09-11 01:02:24.173133 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 71.09s 2025-09-11 01:02:24.173142 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.07s 2025-09-11 
01:02:24.173151 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 17.85s 2025-09-11 01:02:24.173161 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.97s 2025-09-11 01:02:24.173170 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.17s 2025-09-11 01:02:24.173180 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.98s 2025-09-11 01:02:24.173189 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 7.15s 2025-09-11 01:02:24.173199 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.87s 2025-09-11 01:02:24.173214 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.29s 2025-09-11 01:02:24.173223 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.91s 2025-09-11 01:02:24.173233 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.68s 2025-09-11 01:02:24.173242 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.61s 2025-09-11 01:02:24.173252 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.59s 2025-09-11 01:02:24.173261 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.51s 2025-09-11 01:02:24.173270 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.15s 2025-09-11 01:02:24.173280 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.06s 2025-09-11 01:02:24.173289 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.76s 2025-09-11 01:02:24.173298 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.48s 2025-09-11 01:02:24.173308 
| orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.40s 2025-09-11 01:02:24.173317 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.31s 2025-09-11 01:02:24.173327 | orchestrator | 2025-09-11 01:02:24 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:24.173342 | orchestrator | 2025-09-11 01:02:24 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:24.173352 | orchestrator | 2025-09-11 01:02:24 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:27.196785 | orchestrator | 2025-09-11 01:02:27 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:02:27.197041 | orchestrator | 2025-09-11 01:02:27 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:27.197865 | orchestrator | 2025-09-11 01:02:27 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:27.198716 | orchestrator | 2025-09-11 01:02:27 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:27.198785 | orchestrator | 2025-09-11 01:02:27 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:30.225508 | orchestrator | 2025-09-11 01:02:30 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:02:30.226090 | orchestrator | 2025-09-11 01:02:30 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:30.226807 | orchestrator | 2025-09-11 01:02:30 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:30.227806 | orchestrator | 2025-09-11 01:02:30 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:30.227828 | orchestrator | 2025-09-11 01:02:30 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:33.260041 | orchestrator | 2025-09-11 01:02:33 | INFO  | Task 
fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:02:33.260232 | orchestrator | 2025-09-11 01:02:33 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:33.260757 | orchestrator | 2025-09-11 01:02:33 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:33.261334 | orchestrator | 2025-09-11 01:02:33 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:33.261433 | orchestrator | 2025-09-11 01:02:33 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:36.316348 | orchestrator | 2025-09-11 01:02:36 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:02:36.317399 | orchestrator | 2025-09-11 01:02:36 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:36.318317 | orchestrator | 2025-09-11 01:02:36 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:36.318953 | orchestrator | 2025-09-11 01:02:36 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:36.318984 | orchestrator | 2025-09-11 01:02:36 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:39.343923 | orchestrator | 2025-09-11 01:02:39 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:02:39.344276 | orchestrator | 2025-09-11 01:02:39 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:39.345583 | orchestrator | 2025-09-11 01:02:39 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:39.346295 | orchestrator | 2025-09-11 01:02:39 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:39.346321 | orchestrator | 2025-09-11 01:02:39 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:42.371624 | orchestrator | 2025-09-11 01:02:42 | INFO  | Task 
fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:02:42.371958 | orchestrator | 2025-09-11 01:02:42 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:42.372642 | orchestrator | 2025-09-11 01:02:42 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:42.373270 | orchestrator | 2025-09-11 01:02:42 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:42.373338 | orchestrator | 2025-09-11 01:02:42 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:45.398140 | orchestrator | 2025-09-11 01:02:45 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:02:45.400144 | orchestrator | 2025-09-11 01:02:45 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:45.400710 | orchestrator | 2025-09-11 01:02:45 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:45.401918 | orchestrator | 2025-09-11 01:02:45 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:45.401941 | orchestrator | 2025-09-11 01:02:45 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:48.423645 | orchestrator | 2025-09-11 01:02:48 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:02:48.424766 | orchestrator | 2025-09-11 01:02:48 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:02:48.425257 | orchestrator | 2025-09-11 01:02:48 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:02:48.425882 | orchestrator | 2025-09-11 01:02:48 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED 2025-09-11 01:02:48.425906 | orchestrator | 2025-09-11 01:02:48 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:02:51.455409 | orchestrator | 2025-09-11 01:02:51 | INFO  | Task 
fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED
2025-09-11 01:02:51.455599 | orchestrator | 2025-09-11 01:02:51 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED
2025-09-11 01:02:51.456209 | orchestrator | 2025-09-11 01:02:51 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED
2025-09-11 01:02:51.456613 | orchestrator | 2025-09-11 01:02:51 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED
2025-09-11 01:02:51.456641 | orchestrator | 2025-09-11 01:02:51 | INFO  | Wait 1 second(s) until the next check
2025-09-11 01:03:45.971423 | orchestrator | 2025-09-11 01:03:45 | INFO  | Task
fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED
2025-09-11 01:03:45.971761 | orchestrator | 2025-09-11 01:03:45 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED
2025-09-11 01:03:45.973570 | orchestrator | 2025-09-11 01:03:45 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED
2025-09-11 01:03:45.974322 | orchestrator | 2025-09-11 01:03:45 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state STARTED
2025-09-11 01:03:45.974340 | orchestrator | 2025-09-11 01:03:45 | INFO  | Wait 1 second(s) until the next check
2025-09-11 01:03:48.998179 | orchestrator | 2025-09-11 01:03:48 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED
2025-09-11 01:03:48.998338 | orchestrator | 2025-09-11 01:03:48 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED
2025-09-11 01:03:48.999321 | orchestrator | 2025-09-11 01:03:49 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED
2025-09-11 01:03:49.000617 | orchestrator | 2025-09-11 01:03:49 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED
2025-09-11 01:03:49.001900 | orchestrator | 2025-09-11 01:03:49 | INFO  | Task 1510e474-67e4-475c-bf27-da639ec35316 is in state SUCCESS
2025-09-11 01:03:49.003701 | orchestrator |
2025-09-11 01:03:49.003734 | orchestrator |
2025-09-11 01:03:49.003746 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-11 01:03:49.003758 | orchestrator |
2025-09-11 01:03:49.003769 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-11 01:03:49.003781 | orchestrator | Thursday 11 September 2025 01:01:42 +0000 (0:00:00.257) 0:00:00.257 ****
2025-09-11 01:03:49.003813 | orchestrator | ok: [testbed-node-0]
2025-09-11 01:03:49.003826 | orchestrator | ok: [testbed-node-1]
2025-09-11 01:03:49.003837 | orchestrator | ok: [testbed-node-2]
2025-09-11 01:03:49.003848 | orchestrator |
2025-09-11 01:03:49.003859 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-11 01:03:49.003870 | orchestrator | Thursday 11 September 2025 01:01:42 +0000 (0:00:00.258) 0:00:00.516 ****
2025-09-11 01:03:49.003881 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-09-11 01:03:49.003892 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-09-11 01:03:49.003903 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-09-11 01:03:49.003914 | orchestrator |
2025-09-11 01:03:49.003925 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-09-11 01:03:49.003936 | orchestrator |
2025-09-11 01:03:49.003947 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-11 01:03:49.003993 | orchestrator | Thursday 11 September 2025 01:01:43 +0000 (0:00:00.401) 0:00:00.918 ****
2025-09-11 01:03:49.004005 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 01:03:49.004016 | orchestrator |
2025-09-11 01:03:49.004027 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-09-11 01:03:49.004063 | orchestrator | Thursday 11 September 2025 01:01:43 +0000 (0:00:00.496) 0:00:01.414 ****
2025-09-11 01:03:49.004075 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-09-11 01:03:49.004086 | orchestrator |
2025-09-11 01:03:49.004097 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-09-11 01:03:49.004108 | orchestrator | Thursday 11 September 2025 01:01:47 +0000 (0:00:03.651) 0:00:05.066 ****
2025-09-11 01:03:49.004118 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-09-11 01:03:49.004129 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-09-11 01:03:49.004140 | orchestrator |
2025-09-11 01:03:49.004151 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-09-11 01:03:49.004162 | orchestrator | Thursday 11 September 2025 01:01:54 +0000 (0:00:06.678) 0:00:11.744 ****
2025-09-11 01:03:49.004173 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-11 01:03:49.004184 | orchestrator |
2025-09-11 01:03:49.004195 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-09-11 01:03:49.004206 | orchestrator | Thursday 11 September 2025 01:01:57 +0000 (0:00:03.509) 0:00:15.254 ****
2025-09-11 01:03:49.004217 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-11 01:03:49.004228 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-09-11 01:03:49.004238 | orchestrator |
2025-09-11 01:03:49.004262 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-09-11 01:03:49.004273 | orchestrator | Thursday 11 September 2025 01:02:01 +0000 (0:00:03.978) 0:00:19.233 ****
2025-09-11 01:03:49.004284 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-11 01:03:49.004295 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-09-11 01:03:49.004307 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-09-11 01:03:49.004320 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-09-11 01:03:49.004333 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-09-11 01:03:49.004346 | orchestrator |
2025-09-11 01:03:49.004358 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-09-11 01:03:49.004384 | orchestrator | Thursday 11 September 2025 01:02:18 +0000 (0:00:16.724) 0:00:35.958 ****
2025-09-11 01:03:49.004396 |
orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-11 01:03:49.004408 | orchestrator | 2025-09-11 01:03:49.004421 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-11 01:03:49.004434 | orchestrator | Thursday 11 September 2025 01:02:22 +0000 (0:00:04.444) 0:00:40.402 **** 2025-09-11 01:03:49.004450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.004480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.004495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.004515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.004537 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.004551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.004571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.004586 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.004599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.004613 | orchestrator | 2025-09-11 01:03:49.004626 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-11 01:03:49.004639 | orchestrator | Thursday 11 September 2025 01:02:24 +0000 (0:00:01.907) 0:00:42.310 **** 2025-09-11 01:03:49.004652 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-11 01:03:49.004665 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-11 01:03:49.004678 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-11 01:03:49.004691 | orchestrator | 2025-09-11 01:03:49.004702 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-11 01:03:49.004712 | orchestrator | Thursday 11 September 2025 01:02:25 
+0000 (0:00:01.255) 0:00:43.566 ****
2025-09-11 01:03:49.004723 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:03:49.004734 | orchestrator |
2025-09-11 01:03:49.004751 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-09-11 01:03:49.004762 | orchestrator | Thursday 11 September 2025 01:02:26 +0000 (0:00:00.102) 0:00:43.669 ****
2025-09-11 01:03:49.004773 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:03:49.004784 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:03:49.004794 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:03:49.004805 | orchestrator |
2025-09-11 01:03:49.004821 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-11 01:03:49.004832 | orchestrator | Thursday 11 September 2025 01:02:26 +0000 (0:00:00.359) 0:00:44.028 ****
2025-09-11 01:03:49.004843 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 01:03:49.004854 | orchestrator |
2025-09-11 01:03:49.004864 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-09-11 01:03:49.004875 | orchestrator | Thursday 11 September 2025 01:02:27 +0000 (0:00:00.732) 0:00:44.760 ****
2025-09-11 01:03:49.004886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.004905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.004917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.004929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.004952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.004964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.004976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.004994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.005006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.005017 | orchestrator | 2025-09-11 01:03:49.005029 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-11 01:03:49.005068 | orchestrator | Thursday 11 September 2025 01:02:31 +0000 (0:00:04.051) 0:00:48.811 **** 2025-09-11 01:03:49.005080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 01:03:49.005102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005126 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:03:49.005144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 01:03:49.005156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005185 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:03:49.005201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 01:03:49.005213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005235 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:03:49.005247 | orchestrator | 2025-09-11 01:03:49.005257 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-11 01:03:49.005268 | orchestrator | Thursday 11 September 2025 01:02:33 +0000 (0:00:02.036) 0:00:50.848 **** 2025-09-11 01:03:49.005287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 01:03:49.005299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005328 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:03:49.005350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 01:03:49.005362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005385 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:03:49.005403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 01:03:49.005422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.005444 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:03:49.005455 | orchestrator | 2025-09-11 01:03:49.005470 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-11 01:03:49.005482 | orchestrator | Thursday 11 September 2025 01:02:34 +0000 (0:00:01.191) 0:00:52.040 **** 2025-09-11 01:03:49.005493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.005741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.005759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.005779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.005797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.005809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.005820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.005838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.005849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.005870 | orchestrator | 2025-09-11 01:03:49.005881 | 
orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-11 01:03:49.005891 | orchestrator | Thursday 11 September 2025 01:02:37 +0000 (0:00:03.477) 0:00:55.518 **** 2025-09-11 01:03:49.005902 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:03:49.005913 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:03:49.005924 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:03:49.005935 | orchestrator | 2025-09-11 01:03:49.005945 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-11 01:03:49.005956 | orchestrator | Thursday 11 September 2025 01:02:40 +0000 (0:00:02.680) 0:00:58.198 **** 2025-09-11 01:03:49.005967 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-11 01:03:49.005978 | orchestrator | 2025-09-11 01:03:49.005988 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-11 01:03:49.005999 | orchestrator | Thursday 11 September 2025 01:02:41 +0000 (0:00:01.108) 0:00:59.306 **** 2025-09-11 01:03:49.006010 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:03:49.006078 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:03:49.006090 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:03:49.006101 | orchestrator | 2025-09-11 01:03:49.006112 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-11 01:03:49.006123 | orchestrator | Thursday 11 September 2025 01:02:42 +0000 (0:00:01.167) 0:01:00.473 **** 2025-09-11 01:03:49.006139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.006151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.006170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.006189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006279 | orchestrator | 2025-09-11 01:03:49.006291 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-11 01:03:49.006302 | orchestrator | Thursday 11 September 2025 01:02:51 +0000 (0:00:08.646) 0:01:09.120 **** 2025-09-11 01:03:49.006320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 01:03:49.006331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.006343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.006359 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:03:49.006371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2025-09-11 01:03:49.006390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.006406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.006418 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:03:49.006430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-11 01:03:49.006441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.006457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:03:49.006469 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:03:49.006480 | orchestrator | 2025-09-11 01:03:49.006491 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-11 01:03:49.006502 | orchestrator | Thursday 11 September 2025 01:02:52 
+0000 (0:00:01.007) 0:01:10.128 **** 2025-09-11 01:03:49.006513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.006537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.006549 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-11 01:03:49.006561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:03:49.006651 | orchestrator | 2025-09-11 01:03:49.006662 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-11 01:03:49.006673 | orchestrator | Thursday 11 September 2025 01:02:55 +0000 (0:00:03.144) 0:01:13.273 **** 2025-09-11 01:03:49.006683 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:03:49.006694 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:03:49.006705 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:03:49.006716 | orchestrator | 2025-09-11 01:03:49.006727 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-11 01:03:49.006737 | orchestrator | Thursday 11 September 2025 01:02:55 +0000 (0:00:00.321) 0:01:13.594 **** 2025-09-11 01:03:49.006748 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:03:49.006759 | orchestrator | 2025-09-11 01:03:49.006769 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-11 01:03:49.006780 | orchestrator | Thursday 11 September 2025 
01:02:58 +0000 (0:00:02.426) 0:01:16.021 **** 2025-09-11 01:03:49.006791 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:03:49.006802 | orchestrator | 2025-09-11 01:03:49.006813 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-11 01:03:49.006823 | orchestrator | Thursday 11 September 2025 01:03:00 +0000 (0:00:02.466) 0:01:18.488 **** 2025-09-11 01:03:49.006834 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:03:49.006845 | orchestrator | 2025-09-11 01:03:49.006856 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-11 01:03:49.006867 | orchestrator | Thursday 11 September 2025 01:03:12 +0000 (0:00:11.569) 0:01:30.058 **** 2025-09-11 01:03:49.006883 | orchestrator | 2025-09-11 01:03:49.006894 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-11 01:03:49.006905 | orchestrator | Thursday 11 September 2025 01:03:12 +0000 (0:00:00.157) 0:01:30.215 **** 2025-09-11 01:03:49.006916 | orchestrator | 2025-09-11 01:03:49.006931 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-11 01:03:49.006942 | orchestrator | Thursday 11 September 2025 01:03:12 +0000 (0:00:00.137) 0:01:30.352 **** 2025-09-11 01:03:49.006952 | orchestrator | 2025-09-11 01:03:49.006963 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-11 01:03:49.006974 | orchestrator | Thursday 11 September 2025 01:03:12 +0000 (0:00:00.139) 0:01:30.491 **** 2025-09-11 01:03:49.006985 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:03:49.006995 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:03:49.007006 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:03:49.007017 | orchestrator | 2025-09-11 01:03:49.007146 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 
2025-09-11 01:03:49.007162 | orchestrator | Thursday 11 September 2025 01:03:24 +0000 (0:00:11.553) 0:01:42.045 **** 2025-09-11 01:03:49.007173 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:03:49.007184 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:03:49.007195 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:03:49.007206 | orchestrator | 2025-09-11 01:03:49.007216 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-11 01:03:49.007227 | orchestrator | Thursday 11 September 2025 01:03:35 +0000 (0:00:10.708) 0:01:52.753 **** 2025-09-11 01:03:49.007238 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:03:49.007248 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:03:49.007259 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:03:49.007269 | orchestrator | 2025-09-11 01:03:49.007280 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:03:49.007292 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-11 01:03:49.007304 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 01:03:49.007314 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 01:03:49.007325 | orchestrator | 2025-09-11 01:03:49.007336 | orchestrator | 2025-09-11 01:03:49.007347 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:03:49.007357 | orchestrator | Thursday 11 September 2025 01:03:47 +0000 (0:00:12.060) 0:02:04.814 **** 2025-09-11 01:03:49.007368 | orchestrator | =============================================================================== 2025-09-11 01:03:49.007379 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.72s 2025-09-11 01:03:49.007397 
| orchestrator | barbican : Restart barbican-worker container --------------------------- 12.06s 2025-09-11 01:03:49.007408 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.57s 2025-09-11 01:03:49.007419 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.55s 2025-09-11 01:03:49.007430 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.71s 2025-09-11 01:03:49.007441 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.65s 2025-09-11 01:03:49.007451 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.68s 2025-09-11 01:03:49.007462 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.44s 2025-09-11 01:03:49.007472 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.05s 2025-09-11 01:03:49.007482 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.98s 2025-09-11 01:03:49.007499 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.65s 2025-09-11 01:03:49.007509 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.51s 2025-09-11 01:03:49.007518 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.48s 2025-09-11 01:03:49.007528 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.14s 2025-09-11 01:03:49.007537 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.68s 2025-09-11 01:03:49.007547 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.47s 2025-09-11 01:03:49.007556 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.43s 2025-09-11 01:03:49.007566 | orchestrator 
| service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.04s 2025-09-11 01:03:49.007575 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.91s 2025-09-11 01:03:49.007585 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.26s 2025-09-11 01:03:49.007595 | orchestrator | 2025-09-11 01:03:49 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:03:52.027809 | orchestrator | 2025-09-11 01:03:52 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:03:52.029007 | orchestrator | 2025-09-11 01:03:52 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:03:52.030237 | orchestrator | 2025-09-11 01:03:52 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:03:52.031515 | orchestrator | 2025-09-11 01:03:52 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:03:52.031779 | orchestrator | 2025-09-11 01:03:52 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:03:55.068927 | orchestrator | 2025-09-11 01:03:55 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:03:55.069350 | orchestrator | 2025-09-11 01:03:55 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:03:55.070195 | orchestrator | 2025-09-11 01:03:55 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:03:55.070952 | orchestrator | 2025-09-11 01:03:55 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:03:55.071075 | orchestrator | 2025-09-11 01:03:55 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:03:58.114703 | orchestrator | 2025-09-11 01:03:58 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:03:58.117194 | orchestrator | 2025-09-11 01:03:58 | INFO  | Task 
c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:03:58.119119 | orchestrator | 2025-09-11 01:03:58 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:03:58.121332 | orchestrator | 2025-09-11 01:03:58 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:03:58.121361 | orchestrator | 2025-09-11 01:03:58 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:01.165856 | orchestrator | 2025-09-11 01:04:01 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:01.169341 | orchestrator | 2025-09-11 01:04:01 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:01.171635 | orchestrator | 2025-09-11 01:04:01 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:01.173284 | orchestrator | 2025-09-11 01:04:01 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:01.173307 | orchestrator | 2025-09-11 01:04:01 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:04.214472 | orchestrator | 2025-09-11 01:04:04 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:04.216284 | orchestrator | 2025-09-11 01:04:04 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:04.217183 | orchestrator | 2025-09-11 01:04:04 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:04.218549 | orchestrator | 2025-09-11 01:04:04 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:04.218576 | orchestrator | 2025-09-11 01:04:04 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:07.259689 | orchestrator | 2025-09-11 01:04:07 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:07.260084 | orchestrator | 2025-09-11 01:04:07 | INFO  | Task 
c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:07.261213 | orchestrator | 2025-09-11 01:04:07 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:07.263005 | orchestrator | 2025-09-11 01:04:07 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:07.263092 | orchestrator | 2025-09-11 01:04:07 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:10.302411 | orchestrator | 2025-09-11 01:04:10 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:10.304318 | orchestrator | 2025-09-11 01:04:10 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:10.307007 | orchestrator | 2025-09-11 01:04:10 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:10.308710 | orchestrator | 2025-09-11 01:04:10 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:10.309341 | orchestrator | 2025-09-11 01:04:10 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:13.348122 | orchestrator | 2025-09-11 01:04:13 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:13.348520 | orchestrator | 2025-09-11 01:04:13 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:13.350109 | orchestrator | 2025-09-11 01:04:13 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:13.350862 | orchestrator | 2025-09-11 01:04:13 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:13.350886 | orchestrator | 2025-09-11 01:04:13 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:16.407729 | orchestrator | 2025-09-11 01:04:16 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:16.411196 | orchestrator | 2025-09-11 01:04:16 | INFO  | Task 
c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:16.411231 | orchestrator | 2025-09-11 01:04:16 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:16.411245 | orchestrator | 2025-09-11 01:04:16 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:16.411257 | orchestrator | 2025-09-11 01:04:16 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:19.443503 | orchestrator | 2025-09-11 01:04:19 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:19.444200 | orchestrator | 2025-09-11 01:04:19 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:19.444847 | orchestrator | 2025-09-11 01:04:19 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:19.445712 | orchestrator | 2025-09-11 01:04:19 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:19.445757 | orchestrator | 2025-09-11 01:04:19 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:22.469381 | orchestrator | 2025-09-11 01:04:22 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:22.469755 | orchestrator | 2025-09-11 01:04:22 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:22.470591 | orchestrator | 2025-09-11 01:04:22 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:22.471462 | orchestrator | 2025-09-11 01:04:22 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:22.471483 | orchestrator | 2025-09-11 01:04:22 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:25.509478 | orchestrator | 2025-09-11 01:04:25 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:25.509565 | orchestrator | 2025-09-11 01:04:25 | INFO  | Task 
c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:25.512958 | orchestrator | 2025-09-11 01:04:25 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:25.513468 | orchestrator | 2025-09-11 01:04:25 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:25.513490 | orchestrator | 2025-09-11 01:04:25 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:28.536241 | orchestrator | 2025-09-11 01:04:28 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:28.536733 | orchestrator | 2025-09-11 01:04:28 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:28.537509 | orchestrator | 2025-09-11 01:04:28 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:28.538420 | orchestrator | 2025-09-11 01:04:28 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:28.538496 | orchestrator | 2025-09-11 01:04:28 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:31.557792 | orchestrator | 2025-09-11 01:04:31 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:31.558220 | orchestrator | 2025-09-11 01:04:31 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:31.559117 | orchestrator | 2025-09-11 01:04:31 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:31.559806 | orchestrator | 2025-09-11 01:04:31 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:31.559936 | orchestrator | 2025-09-11 01:04:31 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:34.585060 | orchestrator | 2025-09-11 01:04:34 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:34.586202 | orchestrator | 2025-09-11 01:04:34 | INFO  | Task 
c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:34.587270 | orchestrator | 2025-09-11 01:04:34 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:34.588965 | orchestrator | 2025-09-11 01:04:34 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:34.589061 | orchestrator | 2025-09-11 01:04:34 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:37.624449 | orchestrator | 2025-09-11 01:04:37 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:37.624904 | orchestrator | 2025-09-11 01:04:37 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:37.626605 | orchestrator | 2025-09-11 01:04:37 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:37.627217 | orchestrator | 2025-09-11 01:04:37 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:37.627257 | orchestrator | 2025-09-11 01:04:37 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:40.658281 | orchestrator | 2025-09-11 01:04:40 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:40.659122 | orchestrator | 2025-09-11 01:04:40 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:40.659660 | orchestrator | 2025-09-11 01:04:40 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:40.660468 | orchestrator | 2025-09-11 01:04:40 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:40.660495 | orchestrator | 2025-09-11 01:04:40 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:43.695105 | orchestrator | 2025-09-11 01:04:43 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:43.696093 | orchestrator | 2025-09-11 01:04:43 | INFO  | Task 
c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:43.700844 | orchestrator | 2025-09-11 01:04:43 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:43.703841 | orchestrator | 2025-09-11 01:04:43 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:43.703888 | orchestrator | 2025-09-11 01:04:43 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:46.736395 | orchestrator | 2025-09-11 01:04:46 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:46.738250 | orchestrator | 2025-09-11 01:04:46 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:46.740184 | orchestrator | 2025-09-11 01:04:46 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:46.742521 | orchestrator | 2025-09-11 01:04:46 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:46.743068 | orchestrator | 2025-09-11 01:04:46 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:49.778831 | orchestrator | 2025-09-11 01:04:49 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:49.778931 | orchestrator | 2025-09-11 01:04:49 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:49.779520 | orchestrator | 2025-09-11 01:04:49 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:49.781191 | orchestrator | 2025-09-11 01:04:49 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:49.781215 | orchestrator | 2025-09-11 01:04:49 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:52.798499 | orchestrator | 2025-09-11 01:04:52 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:52.798682 | orchestrator | 2025-09-11 01:04:52 | INFO  | Task 
c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:52.799196 | orchestrator | 2025-09-11 01:04:52 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:52.799874 | orchestrator | 2025-09-11 01:04:52 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:52.799905 | orchestrator | 2025-09-11 01:04:52 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:55.821077 | orchestrator | 2025-09-11 01:04:55 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:55.821469 | orchestrator | 2025-09-11 01:04:55 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:55.821689 | orchestrator | 2025-09-11 01:04:55 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:55.822287 | orchestrator | 2025-09-11 01:04:55 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:55.822464 | orchestrator | 2025-09-11 01:04:55 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:04:58.861127 | orchestrator | 2025-09-11 01:04:58 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:04:58.863223 | orchestrator | 2025-09-11 01:04:58 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:04:58.863260 | orchestrator | 2025-09-11 01:04:58 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:04:58.863272 | orchestrator | 2025-09-11 01:04:58 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:04:58.863283 | orchestrator | 2025-09-11 01:04:58 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:01.911746 | orchestrator | 2025-09-11 01:05:01 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:05:01.912218 | orchestrator | 2025-09-11 01:05:01 | INFO  | Task 
c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:05:01.913928 | orchestrator | 2025-09-11 01:05:01 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:01.915768 | orchestrator | 2025-09-11 01:05:01 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:01.915852 | orchestrator | 2025-09-11 01:05:01 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:04.960539 | orchestrator | 2025-09-11 01:05:04 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:05:04.960897 | orchestrator | 2025-09-11 01:05:04 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:05:04.961713 | orchestrator | 2025-09-11 01:05:04 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:04.962431 | orchestrator | 2025-09-11 01:05:04 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:04.962458 | orchestrator | 2025-09-11 01:05:04 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:07.990192 | orchestrator | 2025-09-11 01:05:07 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:05:07.993859 | orchestrator | 2025-09-11 01:05:07 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:05:07.997373 | orchestrator | 2025-09-11 01:05:07 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:08.000440 | orchestrator | 2025-09-11 01:05:08 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:08.000467 | orchestrator | 2025-09-11 01:05:08 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:11.043603 | orchestrator | 2025-09-11 01:05:11 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:05:11.046132 | orchestrator | 2025-09-11 01:05:11 | INFO  | Task 
c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:05:11.048217 | orchestrator | 2025-09-11 01:05:11 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:11.050366 | orchestrator | 2025-09-11 01:05:11 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:11.050414 | orchestrator | 2025-09-11 01:05:11 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:14.086970 | orchestrator | 2025-09-11 01:05:14 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:05:14.088462 | orchestrator | 2025-09-11 01:05:14 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state STARTED 2025-09-11 01:05:14.090352 | orchestrator | 2025-09-11 01:05:14 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:14.091622 | orchestrator | 2025-09-11 01:05:14 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:14.091653 | orchestrator | 2025-09-11 01:05:14 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:17.130259 | orchestrator | 2025-09-11 01:05:17 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED 2025-09-11 01:05:17.132348 | orchestrator | 2025-09-11 01:05:17 | INFO  | Task c4c84254-ea5b-41cf-9fa8-0add6982f0ce is in state SUCCESS 2025-09-11 01:05:17.133839 | orchestrator | 2025-09-11 01:05:17.133874 | orchestrator | 2025-09-11 01:05:17.133909 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 01:05:17.133922 | orchestrator | 2025-09-11 01:05:17.133936 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 01:05:17.133956 | orchestrator | Thursday 11 September 2025 01:01:27 +0000 (0:00:00.234) 0:00:00.234 **** 2025-09-11 01:05:17.134004 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:05:17.134077 | orchestrator | ok: [testbed-node-1] 
2025-09-11 01:05:17.134090 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:05:17.134101 | orchestrator | ok: [testbed-node-3] 2025-09-11 01:05:17.134168 | orchestrator | ok: [testbed-node-4] 2025-09-11 01:05:17.134179 | orchestrator | ok: [testbed-node-5] 2025-09-11 01:05:17.134190 | orchestrator | 2025-09-11 01:05:17.134201 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 01:05:17.134212 | orchestrator | Thursday 11 September 2025 01:01:27 +0000 (0:00:00.567) 0:00:00.801 **** 2025-09-11 01:05:17.134237 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-11 01:05:17.134249 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-11 01:05:17.134260 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-11 01:05:17.134272 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-11 01:05:17.134282 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-11 01:05:17.134293 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-11 01:05:17.134304 | orchestrator | 2025-09-11 01:05:17.134341 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-11 01:05:17.134354 | orchestrator | 2025-09-11 01:05:17.134365 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-11 01:05:17.134388 | orchestrator | Thursday 11 September 2025 01:01:28 +0000 (0:00:00.525) 0:00:01.327 **** 2025-09-11 01:05:17.134400 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 01:05:17.134412 | orchestrator | 2025-09-11 01:05:17.134423 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-11 01:05:17.134435 | orchestrator | Thursday 11 September 
2025 01:01:29 +0000 (0:00:01.004) 0:00:02.331 **** 2025-09-11 01:05:17.134447 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:05:17.134460 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:05:17.134472 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:05:17.134485 | orchestrator | ok: [testbed-node-3] 2025-09-11 01:05:17.134497 | orchestrator | ok: [testbed-node-4] 2025-09-11 01:05:17.134509 | orchestrator | ok: [testbed-node-5] 2025-09-11 01:05:17.134521 | orchestrator | 2025-09-11 01:05:17.134534 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-11 01:05:17.134568 | orchestrator | Thursday 11 September 2025 01:01:30 +0000 (0:00:01.108) 0:00:03.440 **** 2025-09-11 01:05:17.134580 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:05:17.134590 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:05:17.134601 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:05:17.134616 | orchestrator | ok: [testbed-node-3] 2025-09-11 01:05:17.134633 | orchestrator | ok: [testbed-node-4] 2025-09-11 01:05:17.134651 | orchestrator | ok: [testbed-node-5] 2025-09-11 01:05:17.134669 | orchestrator | 2025-09-11 01:05:17.134688 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-11 01:05:17.134701 | orchestrator | Thursday 11 September 2025 01:01:31 +0000 (0:00:00.977) 0:00:04.417 **** 2025-09-11 01:05:17.134712 | orchestrator | ok: [testbed-node-0] => { 2025-09-11 01:05:17.134724 | orchestrator |  "changed": false, 2025-09-11 01:05:17.134734 | orchestrator |  "msg": "All assertions passed" 2025-09-11 01:05:17.134746 | orchestrator | } 2025-09-11 01:05:17.134756 | orchestrator | ok: [testbed-node-1] => { 2025-09-11 01:05:17.134767 | orchestrator |  "changed": false, 2025-09-11 01:05:17.134778 | orchestrator |  "msg": "All assertions passed" 2025-09-11 01:05:17.134789 | orchestrator | } 2025-09-11 01:05:17.134799 | orchestrator | ok: [testbed-node-2] => { 2025-09-11 
01:05:17.134810 | orchestrator |  "changed": false, 2025-09-11 01:05:17.134821 | orchestrator |  "msg": "All assertions passed" 2025-09-11 01:05:17.134832 | orchestrator | } 2025-09-11 01:05:17.134842 | orchestrator | ok: [testbed-node-3] => { 2025-09-11 01:05:17.134853 | orchestrator |  "changed": false, 2025-09-11 01:05:17.134864 | orchestrator |  "msg": "All assertions passed" 2025-09-11 01:05:17.134874 | orchestrator | } 2025-09-11 01:05:17.134885 | orchestrator | ok: [testbed-node-4] => { 2025-09-11 01:05:17.134895 | orchestrator |  "changed": false, 2025-09-11 01:05:17.134906 | orchestrator |  "msg": "All assertions passed" 2025-09-11 01:05:17.134917 | orchestrator | } 2025-09-11 01:05:17.134927 | orchestrator | ok: [testbed-node-5] => { 2025-09-11 01:05:17.134938 | orchestrator |  "changed": false, 2025-09-11 01:05:17.134949 | orchestrator |  "msg": "All assertions passed" 2025-09-11 01:05:17.134960 | orchestrator | } 2025-09-11 01:05:17.134970 | orchestrator | 2025-09-11 01:05:17.135018 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-11 01:05:17.135029 | orchestrator | Thursday 11 September 2025 01:01:32 +0000 (0:00:01.094) 0:00:05.512 **** 2025-09-11 01:05:17.135040 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.135051 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.135061 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.135072 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.135082 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.135093 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.135104 | orchestrator | 2025-09-11 01:05:17.135114 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-11 01:05:17.135125 | orchestrator | Thursday 11 September 2025 01:01:33 +0000 (0:00:01.317) 0:00:06.829 **** 2025-09-11 01:05:17.135136 | orchestrator | changed: 
[testbed-node-0] => (item=neutron (network)) 2025-09-11 01:05:17.135147 | orchestrator | 2025-09-11 01:05:17.135157 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-11 01:05:17.135168 | orchestrator | Thursday 11 September 2025 01:01:37 +0000 (0:00:03.497) 0:00:10.326 **** 2025-09-11 01:05:17.135179 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-11 01:05:17.135191 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-11 01:05:17.135202 | orchestrator | 2025-09-11 01:05:17.135228 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-11 01:05:17.135240 | orchestrator | Thursday 11 September 2025 01:01:44 +0000 (0:00:06.724) 0:00:17.051 **** 2025-09-11 01:05:17.135252 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-11 01:05:17.135272 | orchestrator | 2025-09-11 01:05:17.135283 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-11 01:05:17.135293 | orchestrator | Thursday 11 September 2025 01:01:47 +0000 (0:00:03.494) 0:00:20.546 **** 2025-09-11 01:05:17.135304 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-11 01:05:17.135315 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-11 01:05:17.135326 | orchestrator | 2025-09-11 01:05:17.135337 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-11 01:05:17.135348 | orchestrator | Thursday 11 September 2025 01:01:51 +0000 (0:00:03.892) 0:00:24.438 **** 2025-09-11 01:05:17.135365 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-11 01:05:17.135376 | orchestrator | 2025-09-11 01:05:17.135387 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 
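The service-ks-register tasks above perform a fixed keystone registration sequence: create the service, its internal and public endpoints, the `service` project, the `neutron` user, and the role grants. The role itself uses Ansible OpenStack modules; as an illustrative sketch, the equivalent `openstack` CLI invocations can be built like this (the function name and the command-building approach are hypothetical; the URLs and names are taken from the log):

```python
def ks_register_commands(name, service_type, internal_url, public_url,
                         project="service", roles=("admin", "service")):
    """Build openstack CLI equivalents of the service-ks-register steps:
    service -> endpoints -> project -> user -> role grants."""
    cmds = [
        ["openstack", "service", "create", "--name", name, service_type],
        ["openstack", "endpoint", "create", name, "internal", internal_url],
        ["openstack", "endpoint", "create", name, "public", public_url],
        ["openstack", "project", "create", project],
        ["openstack", "user", "create", "--project", project, name],
    ]
    for role in roles:
        cmds.append(["openstack", "role", "add",
                     "--user", name, "--project", project, role])
    return cmds

# Mirror the neutron registration seen in the log output.
neutron_cmds = ks_register_commands(
    "neutron", "network",
    "https://api-int.testbed.osism.xyz:9696",
    "https://api.testbed.osism.xyz:9696",
)
```

The two role grants (`admin` and `service`) match the "Granting user roles" task, which reports `neutron -> service -> admin` and `neutron -> service -> service`.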
2025-09-11 01:05:17.135398 | orchestrator | Thursday 11 September 2025 01:01:54 +0000 (0:00:03.570) 0:00:28.009 **** 2025-09-11 01:05:17.135408 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-11 01:05:17.135419 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-11 01:05:17.135430 | orchestrator | 2025-09-11 01:05:17.135440 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-11 01:05:17.135451 | orchestrator | Thursday 11 September 2025 01:02:02 +0000 (0:00:07.747) 0:00:35.756 **** 2025-09-11 01:05:17.135462 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.135472 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.135483 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.135494 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.135504 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.135515 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.135526 | orchestrator | 2025-09-11 01:05:17.135536 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-11 01:05:17.135547 | orchestrator | Thursday 11 September 2025 01:02:03 +0000 (0:00:00.675) 0:00:36.432 **** 2025-09-11 01:05:17.135558 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.135568 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.135579 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.135590 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.135600 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.135611 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.135621 | orchestrator | 2025-09-11 01:05:17.135632 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-11 01:05:17.135643 | orchestrator | Thursday 11 September 2025 
01:02:05 +0000 (0:00:02.023) 0:00:38.455 **** 2025-09-11 01:05:17.135654 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:05:17.135665 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:05:17.135675 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:05:17.135686 | orchestrator | ok: [testbed-node-3] 2025-09-11 01:05:17.135697 | orchestrator | ok: [testbed-node-4] 2025-09-11 01:05:17.135708 | orchestrator | ok: [testbed-node-5] 2025-09-11 01:05:17.135718 | orchestrator | 2025-09-11 01:05:17.135729 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-11 01:05:17.135753 | orchestrator | Thursday 11 September 2025 01:02:06 +0000 (0:00:01.006) 0:00:39.462 **** 2025-09-11 01:05:17.135765 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.135775 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.135786 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.135797 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.135807 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.135818 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.135829 | orchestrator | 2025-09-11 01:05:17.135839 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-11 01:05:17.135850 | orchestrator | Thursday 11 September 2025 01:02:08 +0000 (0:00:02.145) 0:00:41.608 **** 2025-09-11 01:05:17.135875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.135907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.135924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.135937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.135949 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.135967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.135995 | orchestrator | 2025-09-11 01:05:17.136006 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-11 01:05:17.136017 | orchestrator | Thursday 11 September 2025 01:02:11 +0000 (0:00:02.732) 0:00:44.340 **** 2025-09-11 01:05:17.136028 | orchestrator | [WARNING]: Skipped 2025-09-11 01:05:17.136040 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-11 01:05:17.136051 | orchestrator | due to this access issue: 2025-09-11 01:05:17.136062 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-11 01:05:17.136072 | orchestrator | a directory 2025-09-11 01:05:17.136083 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-11 01:05:17.136094 | orchestrator | 2025-09-11 01:05:17.136105 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-11 01:05:17.136122 | orchestrator | Thursday 11 September 2025 01:02:12 +0000 (0:00:00.820) 0:00:45.161 **** 2025-09-11 01:05:17.136133 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 01:05:17.136145 | orchestrator | 2025-09-11 01:05:17.136156 | orchestrator | TASK [service-cert-copy : neutron | 
Copying over extra CA certificates] ******** 2025-09-11 01:05:17.136167 | orchestrator | Thursday 11 September 2025 01:02:13 +0000 (0:00:01.153) 0:00:46.314 **** 2025-09-11 01:05:17.136195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.136209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-09-11 01:05:17.136227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.136238 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.136258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.136275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.136287 | orchestrator | 2025-09-11 01:05:17.136298 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-11 01:05:17.136309 | orchestrator | Thursday 11 September 2025 01:02:17 +0000 (0:00:04.227) 0:00:50.542 **** 2025-09-11 01:05:17.136320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.136338 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.136364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.136386 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.136398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.136409 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.136432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.136445 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.136456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.136474 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.136485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.136497 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.136508 | orchestrator | 2025-09-11 01:05:17.136519 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-11 01:05:17.136530 | orchestrator | Thursday 11 September 2025 01:02:19 +0000 (0:00:01.858) 0:00:52.400 **** 2025-09-11 01:05:17.136541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.136553 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.136572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.136583 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.136599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.136610 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.136627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.136639 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.136650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.136661 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.136673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.136684 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.136695 | orchestrator | 2025-09-11 01:05:17.136706 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-11 01:05:17.136717 | orchestrator | Thursday 11 September 2025 01:02:21 +0000 (0:00:02.096) 0:00:54.496 **** 2025-09-11 01:05:17.136728 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.136739 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.136749 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.136760 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.136771 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.136782 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.136792 | orchestrator | 2025-09-11 01:05:17.136803 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-11 01:05:17.136820 | orchestrator | Thursday 11 September 2025 01:02:23 +0000 (0:00:02.088) 0:00:56.584 **** 
2025-09-11 01:05:17.136831 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.136842 | orchestrator | 2025-09-11 01:05:17.136853 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-11 01:05:17.136864 | orchestrator | Thursday 11 September 2025 01:02:23 +0000 (0:00:00.120) 0:00:56.705 **** 2025-09-11 01:05:17.136874 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.136885 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.136896 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.136906 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.136917 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.136934 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.136945 | orchestrator | 2025-09-11 01:05:17.136955 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-11 01:05:17.136966 | orchestrator | Thursday 11 September 2025 01:02:24 +0000 (0:00:00.611) 0:00:57.316 **** 2025-09-11 01:05:17.137034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2025-09-11 01:05:17.137047 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.137058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.137069 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.137080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.137092 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.137521 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.137538 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.137562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.137572 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.137582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.137592 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.137602 | orchestrator | 2025-09-11 01:05:17.137611 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-11 01:05:17.137621 | orchestrator | Thursday 11 September 2025 01:02:26 +0000 (0:00:02.087) 0:00:59.404 **** 2025-09-11 01:05:17.137631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.137641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.137659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.137680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.137691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.137701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.137711 | orchestrator | 2025-09-11 01:05:17.137721 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-11 01:05:17.137731 | orchestrator | Thursday 11 September 2025 01:02:29 +0000 (0:00:03.351) 0:01:02.756 **** 2025-09-11 01:05:17.137740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.137763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.137782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.137792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.137803 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.137813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.137844 | orchestrator | 2025-09-11 01:05:17.137853 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-11 01:05:17.137863 | orchestrator | Thursday 11 September 2025 01:02:35 +0000 (0:00:05.380) 0:01:08.137 **** 2025-09-11 01:05:17.137879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.137890 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.137904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.137914 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.137924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.137934 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.137944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.137954 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.137969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.138063 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.138081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.138092 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.138102 | orchestrator | 2025-09-11 01:05:17.138112 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-11 01:05:17.138124 | orchestrator | Thursday 11 September 2025 01:02:37 +0000 (0:00:02.412) 0:01:10.549 **** 2025-09-11 01:05:17.138135 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.138146 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.138156 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.138168 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:05:17.138179 | 
orchestrator | changed: [testbed-node-2] 2025-09-11 01:05:17.138189 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:05:17.138200 | orchestrator | 2025-09-11 01:05:17.138212 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-11 01:05:17.138223 | orchestrator | Thursday 11 September 2025 01:02:40 +0000 (0:00:03.278) 0:01:13.828 **** 2025-09-11 01:05:17.138233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.138242 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.138252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.138267 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.138276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.138285 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.138379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.138402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.138412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.138422 | orchestrator | 2025-09-11 01:05:17.138432 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-11 01:05:17.138441 | orchestrator | Thursday 11 September 2025 01:02:44 +0000 
(0:00:04.111) 0:01:17.939 ****
2025-09-11 01:05:17.138450 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.138465 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.138475 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.138482 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.138490 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.138498 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.138506 | orchestrator |
2025-09-11 01:05:17.138514 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-11 01:05:17.138521 | orchestrator | Thursday 11 September 2025 01:02:47 +0000 (0:00:02.900) 0:01:20.840 ****
2025-09-11 01:05:17.138529 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.138537 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.138545 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.138552 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.138560 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.138568 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.138576 | orchestrator |
2025-09-11 01:05:17.138583 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-11 01:05:17.138591 | orchestrator | Thursday 11 September 2025 01:02:50 +0000 (0:00:03.130) 0:01:23.971 ****
2025-09-11 01:05:17.138599 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.138607 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.138615 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.138622 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.138630 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.138638 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.138646 | orchestrator |
2025-09-11 01:05:17.138654 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-09-11 01:05:17.138662 | orchestrator | Thursday 11 September 2025 01:02:53 +0000 (0:00:02.383) 0:01:26.354 ****
2025-09-11 01:05:17.138669 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.138677 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.138685 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.138693 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.138700 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.138708 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.138716 | orchestrator |
2025-09-11 01:05:17.138724 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-09-11 01:05:17.138731 | orchestrator | Thursday 11 September 2025 01:02:55 +0000 (0:00:02.181) 0:01:28.536 ****
2025-09-11 01:05:17.138739 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.138747 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.138755 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.138763 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.138775 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.138784 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.138791 | orchestrator |
2025-09-11 01:05:17.138799 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-09-11 01:05:17.138807 | orchestrator | Thursday 11 September 2025 01:02:57 +0000 (0:00:01.728) 0:01:30.265 ****
2025-09-11 01:05:17.138815 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.138823 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.138830 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.138838 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.138846 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.138853 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.138861 | orchestrator |
2025-09-11 01:05:17.138869 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-09-11 01:05:17.138877 | orchestrator | Thursday 11 September 2025 01:02:59 +0000 (0:00:02.668) 0:01:32.933 ****
2025-09-11 01:05:17.138889 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-11 01:05:17.138897 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.138905 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-11 01:05:17.138918 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.138926 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-11 01:05:17.138933 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.138941 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-11 01:05:17.138949 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.138957 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-11 01:05:17.138965 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.138988 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-11 01:05:17.138996 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.139004 | orchestrator |
2025-09-11 01:05:17.139012 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-09-11 01:05:17.139020 | orchestrator | Thursday 11 September 2025 01:03:02 +0000 (0:00:02.306) 0:01:35.240 ****
2025-09-11 01:05:17.139028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.139037 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.139045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.139053 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.139065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.139074 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.139085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.139099 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.139107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.139138 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.139147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.139155 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.139163 | orchestrator | 2025-09-11 01:05:17.139171 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-11 01:05:17.139179 | orchestrator | Thursday 11 September 2025 01:03:04 +0000 (0:00:01.963) 0:01:37.203 **** 2025-09-11 01:05:17.139187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.139195 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.139209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.139223 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.139236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.139244 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.139252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.139260 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.139268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.139276 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.139284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.139297 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.139305 | orchestrator | 2025-09-11 01:05:17.139313 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-11 01:05:17.139321 | orchestrator | Thursday 11 September 2025 01:03:06 +0000 (0:00:01.834) 0:01:39.038 **** 2025-09-11 01:05:17.139329 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.139340 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.139348 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.139356 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.139363 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.139371 | orchestrator | skipping: [testbed-node-5] 2025-09-11 
01:05:17.139379 | orchestrator |
2025-09-11 01:05:17.139387 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-09-11 01:05:17.139395 | orchestrator | Thursday 11 September 2025 01:03:08 +0000 (0:00:02.132) 0:01:41.171 ****
2025-09-11 01:05:17.139403 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.139410 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.139418 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.139426 | orchestrator | changed: [testbed-node-3]
2025-09-11 01:05:17.139433 | orchestrator | changed: [testbed-node-4]
2025-09-11 01:05:17.139441 | orchestrator | changed: [testbed-node-5]
2025-09-11 01:05:17.139449 | orchestrator |
2025-09-11 01:05:17.139456 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-09-11 01:05:17.139468 | orchestrator | Thursday 11 September 2025 01:03:11 +0000 (0:00:03.742) 0:01:44.913 ****
2025-09-11 01:05:17.139476 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.139484 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.139492 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.139499 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.139507 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.139515 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.139523 | orchestrator |
2025-09-11 01:05:17.139530 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-09-11 01:05:17.139538 | orchestrator | Thursday 11 September 2025 01:03:14 +0000 (0:00:02.701) 0:01:47.614 ****
2025-09-11 01:05:17.139546 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.139554 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.139561 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.139569 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.139577 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.139585 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.139592 | orchestrator |
2025-09-11 01:05:17.139600 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-09-11 01:05:17.139608 | orchestrator | Thursday 11 September 2025 01:03:16 +0000 (0:00:02.030) 0:01:49.644 ****
2025-09-11 01:05:17.139616 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.139623 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.139631 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.139639 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.139646 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.139654 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.139662 | orchestrator |
2025-09-11 01:05:17.139670 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-09-11 01:05:17.139677 | orchestrator | Thursday 11 September 2025 01:03:19 +0000 (0:00:03.173) 0:01:52.818 ****
2025-09-11 01:05:17.139685 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.139693 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.139701 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.139708 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.139716 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.139724 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.139731 | orchestrator |
2025-09-11 01:05:17.139744 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-09-11 01:05:17.139752 | orchestrator | Thursday 11 September 2025 01:03:21 +0000 (0:00:02.102) 0:01:54.921 ****
2025-09-11 01:05:17.139760 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.139768 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.139775 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.139783 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.139791 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.139799 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.139806 | orchestrator |
2025-09-11 01:05:17.139814 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-09-11 01:05:17.139822 | orchestrator | Thursday 11 September 2025 01:03:23 +0000 (0:00:02.050) 0:01:56.971 ****
2025-09-11 01:05:17.139830 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.139837 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.139845 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.139853 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.139860 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.139868 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.139876 | orchestrator |
2025-09-11 01:05:17.139884 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-09-11 01:05:17.139891 | orchestrator | Thursday 11 September 2025 01:03:27 +0000 (0:00:03.483) 0:02:00.456 ****
2025-09-11 01:05:17.139899 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.139907 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.139915 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.139922 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.139930 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.139938 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.139945 | orchestrator |
2025-09-11 01:05:17.139953 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-09-11 01:05:17.139961 | orchestrator | Thursday 11 September 2025 01:03:29 +0000 (0:00:02.120) 0:02:02.577 ****
2025-09-11 
01:05:17.139969 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-11 01:05:17.139990 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.139998 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-11 01:05:17.140006 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.140014 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-11 01:05:17.140021 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.140029 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-11 01:05:17.140037 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.140049 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-11 01:05:17.140057 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.140065 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-11 01:05:17.140073 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.140081 | orchestrator | 2025-09-11 01:05:17.140089 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-11 01:05:17.140097 | orchestrator | Thursday 11 September 2025 01:03:32 +0000 (0:00:02.484) 0:02:05.062 **** 2025-09-11 01:05:17.140109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.140122 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:17.140130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.140139 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:17.140147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-11 01:05:17.140155 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:17.140163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.140171 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:05:17.140184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.140205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-11 01:05:17.140213 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:05:17.140221 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:05:17.140229 | orchestrator | 2025-09-11 01:05:17.140237 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-11 01:05:17.140245 | orchestrator | Thursday 11 September 2025 01:03:33 +0000 (0:00:01.841) 0:02:06.904 **** 2025-09-11 01:05:17.140253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.140262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.140275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-09-11 01:05:17.140288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-11 01:05:17.140301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-11 01:05:17.140310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-11 01:05:17.140318 | orchestrator |
2025-09-11 01:05:17.140326 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-11 01:05:17.140334 | orchestrator | Thursday 11 September 2025 01:03:38 +0000 (0:00:04.256) 0:02:11.160 ****
2025-09-11 01:05:17.140341 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:05:17.140349 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:17.140357 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:17.140365 | orchestrator | skipping: [testbed-node-3]
2025-09-11 01:05:17.140372 | orchestrator | skipping: [testbed-node-4]
2025-09-11 01:05:17.140380 | orchestrator | skipping: [testbed-node-5]
2025-09-11 01:05:17.140388 | orchestrator |
2025-09-11 01:05:17.140396 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-09-11 01:05:17.140404 | orchestrator | Thursday 11 September 2025 01:03:39 +0000 (0:00:00.889) 0:02:12.050 ****
2025-09-11 01:05:17.140411 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:05:17.140419 | orchestrator |
2025-09-11 01:05:17.140427 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-09-11 01:05:17.140435 | orchestrator | Thursday 11 September 2025 01:03:41 +0000 (0:00:02.140) 0:02:14.191 ****
2025-09-11 01:05:17.140443 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:05:17.140450 | orchestrator |
2025-09-11 01:05:17.140458 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-09-11 01:05:17.140466 | orchestrator | Thursday 11 September 2025 01:03:43 +0000 (0:00:02.404) 0:02:16.595 ****
2025-09-11 01:05:17.140474 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:05:17.140482 | orchestrator |
2025-09-11 01:05:17.140489 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-11 01:05:17.140497 | orchestrator | Thursday 11 September 2025 01:04:25 +0000 (0:00:41.877) 0:02:58.473 ****
2025-09-11 01:05:17.140505 | orchestrator |
2025-09-11 01:05:17.140513 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-11 01:05:17.140525 | orchestrator | Thursday 11 September 2025 01:04:25 +0000 (0:00:00.153) 0:02:58.626 ****
2025-09-11 01:05:17.140533 | orchestrator |
2025-09-11 01:05:17.140541 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-11 01:05:17.140549 | orchestrator | Thursday 11 September 2025 01:04:25 +0000 (0:00:00.263) 0:02:58.889 ****
2025-09-11 01:05:17.140557 | orchestrator |
2025-09-11 01:05:17.140564 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-11 01:05:17.140572 | orchestrator | Thursday 11 September 2025 01:04:25 +0000 (0:00:00.063) 0:02:58.952 ****
2025-09-11 01:05:17.140580 | orchestrator |
2025-09-11 01:05:17.140592 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-11 01:05:17.140600 | orchestrator | Thursday 11 September 2025 01:04:26 +0000 (0:00:00.086) 0:02:59.039 ****
2025-09-11 01:05:17.140608 | orchestrator |
2025-09-11 01:05:17.140619 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-11 01:05:17.140632 | orchestrator | Thursday 11 September 2025 01:04:26 +0000 (0:00:00.082) 0:02:59.121 ****
2025-09-11 01:05:17.140645 | orchestrator |
2025-09-11 01:05:17.140658 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-09-11 01:05:17.140672 | orchestrator | Thursday 11 September 2025 01:04:26 +0000 (0:00:00.090) 0:02:59.212 ****
2025-09-11 01:05:17.140684 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:05:17.140697 | orchestrator | changed: [testbed-node-1]
2025-09-11 01:05:17.140709 | orchestrator | changed: [testbed-node-2]
2025-09-11 01:05:17.140722 | orchestrator |
2025-09-11 01:05:17.140734 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-09-11 01:05:17.140753 | orchestrator | Thursday 11 September 2025 01:04:51 +0000 (0:00:25.010) 0:03:24.223 ****
2025-09-11 01:05:17.140767 | orchestrator | changed: [testbed-node-5]
2025-09-11 01:05:17.140780 | orchestrator | changed: [testbed-node-3]
2025-09-11 01:05:17.140793 | orchestrator | changed: [testbed-node-4]
2025-09-11 01:05:17.140806 | orchestrator |
2025-09-11 01:05:17.140820 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 01:05:17.140829 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-11 01:05:17.140838 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-11 01:05:17.140846 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-11 01:05:17.140853 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-11 01:05:17.140861 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-11 01:05:17.140869 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-11 01:05:17.140877 | orchestrator |
2025-09-11 01:05:17.140885 | orchestrator |
2025-09-11 01:05:17.140893 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 01:05:17.140901 | orchestrator | Thursday 11 September 2025 01:05:15 +0000 (0:00:24.559) 0:03:48.782 ****
2025-09-11 01:05:17.140908 | orchestrator | ===============================================================================
2025-09-11 01:05:17.140916 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.88s
2025-09-11 01:05:17.140924 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.01s
2025-09-11 01:05:17.140932 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 24.56s
2025-09-11 01:05:17.140939 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.75s
2025-09-11 01:05:17.140953 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.72s
2025-09-11 01:05:17.140961 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.38s
2025-09-11 01:05:17.140969 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.26s
2025-09-11 01:05:17.141022 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.23s
2025-09-11 01:05:17.141031 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.11s
2025-09-11 01:05:17.141039 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.89s
2025-09-11 01:05:17.141046 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.74s
2025-09-11 01:05:17.141054 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.57s
2025-09-11 01:05:17.141062 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.50s
2025-09-11 01:05:17.141070 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.49s
2025-09-11 01:05:17.141077 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.48s
2025-09-11 01:05:17.141085 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.35s
2025-09-11 01:05:17.141093 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.28s
2025-09-11 01:05:17.141101 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.17s
2025-09-11 01:05:17.141108 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.13s
2025-09-11 01:05:17.141116 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 2.90s
2025-09-11 01:05:17.141124 | orchestrator | 2025-09-11 01:05:17 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED
2025-09-11 01:05:17.141132 | orchestrator | 2025-09-11 01:05:17 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED
2025-09-11 01:05:17.141140 | orchestrator | 2025-09-11 01:05:17 | INFO  | Wait 1 second(s) until the next check
2025-09-11 01:05:20.178818 | orchestrator | 2025-09-11 01:05:20 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state STARTED
2025-09-11 01:05:20.189051 | orchestrator | 2025-09-11 01:05:20 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED
2025-09-11 01:05:20.190739 | orchestrator | 2025-09-11 01:05:20 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED
2025-09-11 01:05:20.191466 | orchestrator | 2025-09-11 01:05:20 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED
2025-09-11 01:05:20.191494 | orchestrator | 2025-09-11 01:05:20 | INFO  | Wait 1 second(s) until the next check
2025-09-11 01:05:23.232456 | orchestrator |
2025-09-11 01:05:23 | INFO  | Task fd191646-bb08-4315-b263-8ab66c83dfe1 is in state SUCCESS
2025-09-11 01:05:23.235604 | orchestrator |
2025-09-11 01:05:23.235651 | orchestrator |
2025-09-11 01:05:23.235664 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-11 01:05:23.235676 | orchestrator |
2025-09-11 01:05:23.235687 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-11 01:05:23.235698 | orchestrator | Thursday 11 September 2025 01:02:25 +0000 (0:00:00.427) 0:00:00.427 ****
2025-09-11 01:05:23.235709 | orchestrator | ok: [testbed-node-0]
2025-09-11 01:05:23.235721 | orchestrator | ok: [testbed-node-1]
2025-09-11 01:05:23.235731 | orchestrator | ok: [testbed-node-2]
2025-09-11 01:05:23.235769 | orchestrator |
2025-09-11 01:05:23.235781 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-11 01:05:23.235792 | orchestrator | Thursday 11 September 2025 01:02:26 +0000 (0:00:00.222) 0:00:00.650 ****
2025-09-11 01:05:23.235803 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-09-11 01:05:23.235815 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-09-11 01:05:23.235848 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-09-11 01:05:23.235859 | orchestrator |
2025-09-11 01:05:23.235870 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-09-11 01:05:23.235881 | orchestrator |
2025-09-11 01:05:23.235891 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-11 01:05:23.235902 | orchestrator | Thursday 11 September 2025 01:02:26 +0000 (0:00:00.347) 0:00:00.997 ****
2025-09-11 01:05:23.235913 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 01:05:23.235924 | orchestrator |
2025-09-11 01:05:23.235935 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-09-11 01:05:23.235946 | orchestrator | Thursday 11 September 2025 01:02:27 +0000 (0:00:00.917) 0:00:01.914 ****
2025-09-11 01:05:23.235956 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-09-11 01:05:23.236118 | orchestrator |
2025-09-11 01:05:23.236132 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-09-11 01:05:23.236143 | orchestrator | Thursday 11 September 2025 01:02:31 +0000 (0:00:04.265) 0:00:06.180 ****
2025-09-11 01:05:23.236159 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-09-11 01:05:23.236179 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-09-11 01:05:23.236198 | orchestrator |
2025-09-11 01:05:23.236228 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-09-11 01:05:23.236241 | orchestrator | Thursday 11 September 2025 01:02:38 +0000 (0:00:06.460) 0:00:12.640 ****
2025-09-11 01:05:23.236259 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-11 01:05:23.236282 | orchestrator |
2025-09-11 01:05:23.236305 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-09-11 01:05:23.236348 | orchestrator | Thursday 11 September 2025 01:02:41 +0000 (0:00:03.829) 0:00:16.469 ****
2025-09-11 01:05:23.236368 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-11 01:05:23.236388 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-09-11 01:05:23.236407 | orchestrator |
2025-09-11 01:05:23.236427 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-09-11 01:05:23.236447 | orchestrator | Thursday
11 September 2025 01:02:46 +0000 (0:00:04.342) 0:00:20.812 **** 2025-09-11 01:05:23.236466 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-11 01:05:23.236486 | orchestrator | 2025-09-11 01:05:23.236506 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-11 01:05:23.236525 | orchestrator | Thursday 11 September 2025 01:02:49 +0000 (0:00:03.705) 0:00:24.517 **** 2025-09-11 01:05:23.236544 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-11 01:05:23.236562 | orchestrator | 2025-09-11 01:05:23.236582 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-11 01:05:23.236603 | orchestrator | Thursday 11 September 2025 01:02:54 +0000 (0:00:04.643) 0:00:29.161 **** 2025-09-11 01:05:23.236627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.236702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.236727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.236749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.236770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.236791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.236812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 
01:05:23.237200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237322 | orchestrator | 2025-09-11 01:05:23.237342 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-11 01:05:23.237360 | orchestrator | Thursday 11 September 2025 01:02:57 +0000 (0:00:03.331) 0:00:32.492 **** 2025-09-11 01:05:23.237379 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:23.237399 | orchestrator | 2025-09-11 01:05:23.237417 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-11 01:05:23.237437 | orchestrator | Thursday 11 September 2025 01:02:58 +0000 (0:00:00.254) 0:00:32.747 **** 2025-09-11 01:05:23.237456 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:23.237475 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:23.237493 | orchestrator | skipping: [testbed-node-2] 2025-09-11 
01:05:23.237512 | orchestrator | 2025-09-11 01:05:23.237531 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-11 01:05:23.237549 | orchestrator | Thursday 11 September 2025 01:02:58 +0000 (0:00:00.733) 0:00:33.481 **** 2025-09-11 01:05:23.237578 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:05:23.237599 | orchestrator | 2025-09-11 01:05:23.237619 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-11 01:05:23.237638 | orchestrator | Thursday 11 September 2025 01:02:59 +0000 (0:00:00.834) 0:00:34.315 **** 2025-09-11 01:05:23.237656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.237695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.237718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.237738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 
01:05:23.237924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.237956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.238004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.238134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.238158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.238178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.238198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-11 01:05:23.238218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-11 01:05:23.238251 | orchestrator |
2025-09-11 01:05:23.238272 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-09-11 01:05:23.238290 | orchestrator | Thursday 11 September 2025 01:03:06 +0000 (0:00:07.035) 0:00:41.351 ****
2025-09-11 01:05:23.238310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode':
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.238368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 01:05:23.238396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.238409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.238420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.238431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.238454 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:23.238466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.238514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 01:05:23.238547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.238686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.238710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.238731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.238765 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:23.238786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.238808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 01:05:23.238850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.238873 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-11 01:05:23.238894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-11 01:05:23.238930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-11 01:05:23.238953 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:05:23.239034 | orchestrator |
2025-09-11 01:05:23.239056 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-09-11 01:05:23.239077 | orchestrator | Thursday 11 September 2025 01:03:08 +0000 (0:00:01.422) 0:00:42.773 ****
2025-09-11 01:05:23.239096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-11 01:05:23.239125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-11 01:05:23.239166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.239186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.239206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.239241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.239262 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:23.239283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.239304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2025-09-11 01:05:23.239339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.239358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.239379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.239409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.239428 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:23.239448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.239468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 01:05:23.239488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.239523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.239543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-11 01:05:23.239580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-11 01:05:23.239599 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:05:23.239618 | orchestrator |
2025-09-11 01:05:23.239638 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-09-11 01:05:23.239659 | orchestrator | Thursday 11 September 2025 01:03:10 +0000 (0:00:01.844) 0:00:44.618 ****
2025-09-11 01:05:23.239680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-11 01:05:23.239702 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.239742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.239764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.239795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.239816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.239836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.239856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.239892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.239915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.239948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.240043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.240068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.240089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.240107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.240144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-11 01:05:23.240164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-11 01:05:23.240194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-11 01:05:23.240212 | orchestrator |
2025-09-11 01:05:23.240230 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-09-11 01:05:23.240249 | orchestrator | Thursday 11 September 2025 01:03:16 +0000 (0:00:06.417) 0:00:51.035 ****
2025-09-11 01:05:23.240268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.240289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.240304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.241212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241433 | orchestrator | 2025-09-11 01:05:23.241443 | orchestrator | TASK [designate : 
Copying over pools.yaml] *************************************
2025-09-11 01:05:23.241452 | orchestrator | Thursday 11 September 2025 01:03:34 +0000 (0:00:17.598) 0:01:08.633 ****
2025-09-11 01:05:23.241462 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-11 01:05:23.241472 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-11 01:05:23.241481 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-11 01:05:23.241490 | orchestrator |
2025-09-11 01:05:23.241500 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-09-11 01:05:23.241510 | orchestrator | Thursday 11 September 2025 01:03:40 +0000 (0:00:06.344) 0:01:14.978 ****
2025-09-11 01:05:23.241519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-11 01:05:23.241529 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-11 01:05:23.241538 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-11 01:05:23.241548 | orchestrator |
2025-09-11 01:05:23.241558 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-09-11 01:05:23.241567 | orchestrator | Thursday 11 September 2025 01:03:43 +0000 (0:00:02.601) 0:01:17.580 ****
2025-09-11 01:05:23.241577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.241588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.241613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.241624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241766 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241809 | orchestrator | 2025-09-11 01:05:23.241822 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-11 01:05:23.241838 | 
orchestrator | Thursday 11 September 2025 01:03:45 +0000 (0:00:02.775) 0:01:20.355 **** 2025-09-11 01:05:23.241850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.241866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.241892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.241921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.241934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.241990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242139 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242286 | orchestrator | 2025-09-11 01:05:23.242304 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-11 01:05:23.242322 | orchestrator | Thursday 11 September 2025 01:03:48 +0000 (0:00:02.709) 0:01:23.064 **** 2025-09-11 01:05:23.242334 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:23.242344 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:23.242353 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:23.242363 | orchestrator | 2025-09-11 01:05:23.242372 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-11 01:05:23.242382 | orchestrator | Thursday 11 September 2025 01:03:48 +0000 (0:00:00.206) 0:01:23.271 **** 2025-09-11 01:05:23.242391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.242408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 01:05:23.242418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242480 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:23.242489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.242505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 01:05:23.242515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242565 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:23.242575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-11 01:05:23.242590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-11 01:05:23.242600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:05:23.242661 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:23.242670 | orchestrator | 2025-09-11 01:05:23.242680 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-11 01:05:23.242690 | orchestrator | Thursday 11 September 2025 01:03:49 +0000 
(0:00:00.951) 0:01:24.222 **** 2025-09-11 01:05:23.242700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.242715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.242726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-11 01:05:23.242736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:05:23.242923 | orchestrator | 2025-09-11 01:05:23.242933 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-11 01:05:23.242942 | orchestrator | Thursday 11 September 2025 01:03:54 +0000 (0:00:04.599) 0:01:28.822 **** 2025-09-11 01:05:23.242952 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:05:23.242962 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:05:23.242998 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:05:23.243008 | orchestrator | 2025-09-11 01:05:23.243017 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-11 01:05:23.243027 | orchestrator | Thursday 11 September 2025 01:03:54 +0000 (0:00:00.253) 0:01:29.076 **** 2025-09-11 01:05:23.243036 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-11 01:05:23.243046 | orchestrator | 2025-09-11 01:05:23.243055 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-11 01:05:23.243065 | orchestrator | Thursday 11 September 2025 01:03:56 +0000 (0:00:02.125) 0:01:31.201 **** 2025-09-11 01:05:23.243074 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-11 01:05:23.243084 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-11 01:05:23.243093 | orchestrator | 2025-09-11 01:05:23.243102 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-11 01:05:23.243112 | orchestrator | Thursday 11 September 2025 01:03:58 +0000 (0:00:02.316) 0:01:33.517 **** 
2025-09-11 01:05:23.243121 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:05:23.243130 | orchestrator | 2025-09-11 01:05:23.243140 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-11 01:05:23.243149 | orchestrator | Thursday 11 September 2025 01:04:16 +0000 (0:00:17.745) 0:01:51.262 **** 2025-09-11 01:05:23.243158 | orchestrator | 2025-09-11 01:05:23.243168 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-11 01:05:23.243177 | orchestrator | Thursday 11 September 2025 01:04:16 +0000 (0:00:00.219) 0:01:51.482 **** 2025-09-11 01:05:23.243187 | orchestrator | 2025-09-11 01:05:23.243196 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-11 01:05:23.243205 | orchestrator | Thursday 11 September 2025 01:04:16 +0000 (0:00:00.066) 0:01:51.548 **** 2025-09-11 01:05:23.243215 | orchestrator | 2025-09-11 01:05:23.243224 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-11 01:05:23.243233 | orchestrator | Thursday 11 September 2025 01:04:17 +0000 (0:00:00.064) 0:01:51.613 **** 2025-09-11 01:05:23.243243 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:05:23.243252 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:05:23.243261 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:05:23.243271 | orchestrator | 2025-09-11 01:05:23.243280 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-11 01:05:23.243290 | orchestrator | Thursday 11 September 2025 01:04:30 +0000 (0:00:13.140) 0:02:04.753 **** 2025-09-11 01:05:23.243299 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:05:23.243308 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:05:23.243318 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:05:23.243327 | orchestrator | 2025-09-11 01:05:23.243336 | 
orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-11 01:05:23.243346 | orchestrator | Thursday 11 September 2025 01:04:37 +0000 (0:00:07.169) 0:02:11.923 **** 2025-09-11 01:05:23.243355 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:05:23.243365 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:05:23.243374 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:05:23.243383 | orchestrator | 2025-09-11 01:05:23.243393 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-11 01:05:23.243402 | orchestrator | Thursday 11 September 2025 01:04:47 +0000 (0:00:10.568) 0:02:22.492 **** 2025-09-11 01:05:23.243411 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:05:23.243421 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:05:23.243430 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:05:23.243439 | orchestrator | 2025-09-11 01:05:23.243449 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-11 01:05:23.243458 | orchestrator | Thursday 11 September 2025 01:04:59 +0000 (0:00:11.379) 0:02:33.872 **** 2025-09-11 01:05:23.243467 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:05:23.243477 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:05:23.243486 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:05:23.243500 | orchestrator | 2025-09-11 01:05:23.243510 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-11 01:05:23.243519 | orchestrator | Thursday 11 September 2025 01:05:04 +0000 (0:00:05.293) 0:02:39.165 **** 2025-09-11 01:05:23.243529 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:05:23.243538 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:05:23.243547 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:05:23.243562 | orchestrator | 2025-09-11 01:05:23.243578 | orchestrator | 
TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-11 01:05:23.243601 | orchestrator | Thursday 11 September 2025 01:05:15 +0000 (0:00:10.827) 0:02:49.993 **** 2025-09-11 01:05:23.243622 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:05:23.243637 | orchestrator | 2025-09-11 01:05:23.243653 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:05:23.243669 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-11 01:05:23.243684 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 01:05:23.243700 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 01:05:23.243716 | orchestrator | 2025-09-11 01:05:23.243732 | orchestrator | 2025-09-11 01:05:23.243766 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:05:23.243783 | orchestrator | Thursday 11 September 2025 01:05:22 +0000 (0:00:07.384) 0:02:57.377 **** 2025-09-11 01:05:23.243794 | orchestrator | =============================================================================== 2025-09-11 01:05:23.243803 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.75s 2025-09-11 01:05:23.243812 | orchestrator | designate : Copying over designate.conf -------------------------------- 17.60s 2025-09-11 01:05:23.243823 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.14s 2025-09-11 01:05:23.243839 | orchestrator | designate : Restart designate-producer container ----------------------- 11.38s 2025-09-11 01:05:23.243855 | orchestrator | designate : Restart designate-worker container ------------------------- 10.83s 2025-09-11 01:05:23.243872 | orchestrator | designate : Restart designate-central 
container ------------------------ 10.57s 2025-09-11 01:05:23.243889 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.38s 2025-09-11 01:05:23.243908 | orchestrator | designate : Restart designate-api container ----------------------------- 7.17s 2025-09-11 01:05:23.243925 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.04s 2025-09-11 01:05:23.243943 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.46s 2025-09-11 01:05:23.243960 | orchestrator | designate : Copying over config.json files for services ----------------- 6.42s 2025-09-11 01:05:23.244043 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.34s 2025-09-11 01:05:23.244061 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.29s 2025-09-11 01:05:23.244079 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.64s 2025-09-11 01:05:23.244097 | orchestrator | designate : Check designate containers ---------------------------------- 4.60s 2025-09-11 01:05:23.244115 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.34s 2025-09-11 01:05:23.244133 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.27s 2025-09-11 01:05:23.244150 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.83s 2025-09-11 01:05:23.244168 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.71s 2025-09-11 01:05:23.244186 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.33s 2025-09-11 01:05:23.244218 | orchestrator | 2025-09-11 01:05:23 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:23.244236 | orchestrator | 2025-09-11 01:05:23 | INFO  | Task 
5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:23.244253 | orchestrator | 2025-09-11 01:05:23 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:23.244271 | orchestrator | 2025-09-11 01:05:23 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:26.278986 | orchestrator | 2025-09-11 01:05:26 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:26.281848 | orchestrator | 2025-09-11 01:05:26 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:26.283942 | orchestrator | 2025-09-11 01:05:26 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:26.286738 | orchestrator | 2025-09-11 01:05:26 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:26.286836 | orchestrator | 2025-09-11 01:05:26 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:29.331306 | orchestrator | 2025-09-11 01:05:29 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:29.332533 | orchestrator | 2025-09-11 01:05:29 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:29.334134 | orchestrator | 2025-09-11 01:05:29 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:29.336200 | orchestrator | 2025-09-11 01:05:29 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:29.336222 | orchestrator | 2025-09-11 01:05:29 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:32.381133 | orchestrator | 2025-09-11 01:05:32 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:32.382413 | orchestrator | 2025-09-11 01:05:32 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:32.383759 | orchestrator | 2025-09-11 01:05:32 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:32.385418 | orchestrator | 2025-09-11 01:05:32 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:32.385449 | orchestrator | 2025-09-11 01:05:32 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:35.425680 | orchestrator | 2025-09-11 01:05:35 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:35.428435 | orchestrator | 2025-09-11 01:05:35 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:35.430774 | orchestrator | 2025-09-11 01:05:35 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:35.432092 | orchestrator | 2025-09-11 01:05:35 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:35.432111 | orchestrator | 2025-09-11 01:05:35 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:38.488358 | orchestrator | 2025-09-11 01:05:38 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:38.488461 | orchestrator | 2025-09-11 01:05:38 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:38.488476 | orchestrator | 2025-09-11 01:05:38 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:38.489140 | orchestrator | 2025-09-11 01:05:38 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:38.492138 | orchestrator | 2025-09-11 01:05:38 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:41.529813 | orchestrator | 2025-09-11 01:05:41 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:41.532842 | orchestrator | 2025-09-11 01:05:41 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:41.535208 | orchestrator | 2025-09-11 01:05:41 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:41.537496 | orchestrator | 2025-09-11 01:05:41 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:41.537734 | orchestrator | 2025-09-11 01:05:41 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:44.579509 | orchestrator | 2025-09-11 01:05:44 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:44.582339 | orchestrator | 2025-09-11 01:05:44 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:44.585335 | orchestrator | 2025-09-11 01:05:44 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:44.588547 | orchestrator | 2025-09-11 01:05:44 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:44.588981 | orchestrator | 2025-09-11 01:05:44 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:47.623463 | orchestrator | 2025-09-11 01:05:47 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:47.624543 | orchestrator | 2025-09-11 01:05:47 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:47.625186 | orchestrator | 2025-09-11 01:05:47 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:47.626384 | orchestrator | 2025-09-11 01:05:47 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:47.626411 | orchestrator | 2025-09-11 01:05:47 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:50.671864 | orchestrator | 2025-09-11 01:05:50 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:50.673795 | orchestrator | 2025-09-11 01:05:50 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:50.675686 | orchestrator | 2025-09-11 01:05:50 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:50.677635 | orchestrator | 2025-09-11 01:05:50 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:50.677699 | orchestrator | 2025-09-11 01:05:50 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:53.714891 | orchestrator | 2025-09-11 01:05:53 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:53.716358 | orchestrator | 2025-09-11 01:05:53 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:53.717682 | orchestrator | 2025-09-11 01:05:53 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:53.719147 | orchestrator | 2025-09-11 01:05:53 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:53.719175 | orchestrator | 2025-09-11 01:05:53 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:56.759062 | orchestrator | 2025-09-11 01:05:56 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:56.760477 | orchestrator | 2025-09-11 01:05:56 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:56.762834 | orchestrator | 2025-09-11 01:05:56 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:56.764502 | orchestrator | 2025-09-11 01:05:56 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:56.764650 | orchestrator | 2025-09-11 01:05:56 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:05:59.802149 | orchestrator | 2025-09-11 01:05:59 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:05:59.804366 | orchestrator | 2025-09-11 01:05:59 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:05:59.806572 | orchestrator | 2025-09-11 01:05:59 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:05:59.808622 | orchestrator | 2025-09-11 01:05:59 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:05:59.808647 | orchestrator | 2025-09-11 01:05:59 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:06:02.846746 | orchestrator | 2025-09-11 01:06:02 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:06:02.848329 | orchestrator | 2025-09-11 01:06:02 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:06:02.849617 | orchestrator | 2025-09-11 01:06:02 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:06:02.851452 | orchestrator | 2025-09-11 01:06:02 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:06:02.851474 | orchestrator | 2025-09-11 01:06:02 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:06:05.893361 | orchestrator | 2025-09-11 01:06:05 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:06:05.895520 | orchestrator | 2025-09-11 01:06:05 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:06:05.897280 | orchestrator | 2025-09-11 01:06:05 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:06:05.899088 | orchestrator | 2025-09-11 01:06:05 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:06:05.899128 | orchestrator | 2025-09-11 01:06:05 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:06:08.939075 | orchestrator | 2025-09-11 01:06:08 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:06:08.941631 | orchestrator | 2025-09-11 01:06:08 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:06:08.943986 | orchestrator | 2025-09-11 01:06:08 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:06:08.946673 | orchestrator | 2025-09-11 01:06:08 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:06:08.947007 | orchestrator | 2025-09-11 01:06:08 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:06:11.984519 | orchestrator | 2025-09-11 01:06:11 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:06:11.986383 | orchestrator | 2025-09-11 01:06:11 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:06:11.987623 | orchestrator | 2025-09-11 01:06:11 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:06:11.988993 | orchestrator | 2025-09-11 01:06:11 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:06:11.989019 | orchestrator | 2025-09-11 01:06:11 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:06:15.026126 | orchestrator | 2025-09-11 01:06:15 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:06:15.026228 | orchestrator | 2025-09-11 01:06:15 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:06:15.028598 | orchestrator | 2025-09-11 01:06:15 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:06:15.028626 | orchestrator | 2025-09-11 01:06:15 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:06:15.028780 | orchestrator | 2025-09-11 01:06:15 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:06:18.114744 | orchestrator | 2025-09-11 01:06:18 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:06:18.117519 | orchestrator | 2025-09-11 01:06:18 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:06:18.118814 | orchestrator | 2025-09-11 01:06:18 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:06:18.119732 | orchestrator | 2025-09-11 01:06:18 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:06:18.119773 | orchestrator | 2025-09-11 01:06:18 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:06:21.151311 | orchestrator | 2025-09-11 01:06:21 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:06:21.151730 | orchestrator | 2025-09-11 01:06:21 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:06:21.152585 | orchestrator | 2025-09-11 01:06:21 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:06:21.153557 | orchestrator | 2025-09-11 01:06:21 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:06:21.153592 | orchestrator | 2025-09-11 01:06:21 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:06:24.190340 | orchestrator | 2025-09-11 01:06:24 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state STARTED 2025-09-11 01:06:24.191039 | orchestrator | 2025-09-11 01:06:24 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:06:24.191570 | orchestrator | 2025-09-11 01:06:24 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:06:24.192425 | orchestrator | 2025-09-11 01:06:24 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED 2025-09-11 01:06:24.192535 | orchestrator | 2025-09-11 01:06:24 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:06:27.230479 | orchestrator | 2025-09-11 01:06:27 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:06:27.232671 | orchestrator | 2025-09-11 01:06:27 | INFO  | Task 870e330f-972d-46ee-87e4-5a171dfd9161 is in state SUCCESS 2025-09-11 01:06:27.234200 | orchestrator | 2025-09-11 01:06:27.234234 | orchestrator | 2025-09-11 
2025-09-11 01:06:27.234247 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-11 01:06:27.234258 | orchestrator |
2025-09-11 01:06:27.234270 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-11 01:06:27.234281 | orchestrator | Thursday 11 September 2025  01:05:20 +0000 (0:00:00.299) 0:00:00.299 ****
2025-09-11 01:06:27.234292 | orchestrator | ok: [testbed-node-0]
2025-09-11 01:06:27.234304 | orchestrator | ok: [testbed-node-1]
2025-09-11 01:06:27.234315 | orchestrator | ok: [testbed-node-2]
2025-09-11 01:06:27.234326 | orchestrator |
2025-09-11 01:06:27.234336 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-11 01:06:27.234386 | orchestrator | Thursday 11 September 2025  01:05:20 +0000 (0:00:00.282) 0:00:00.582 ****
2025-09-11 01:06:27.234398 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-11 01:06:27.234410 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-11 01:06:27.234421 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-11 01:06:27.234456 | orchestrator |
2025-09-11 01:06:27.234468 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-11 01:06:27.234479 | orchestrator |
2025-09-11 01:06:27.234489 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-11 01:06:27.234500 | orchestrator | Thursday 11 September 2025  01:05:20 +0000 (0:00:00.366) 0:00:00.948 ****
2025-09-11 01:06:27.234511 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 01:06:27.234523 | orchestrator |
2025-09-11 01:06:27.234534 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-11 01:06:27.234544 | orchestrator | Thursday 11 September 2025  01:05:21 +0000 (0:00:00.521) 0:00:01.469 ****
2025-09-11 01:06:27.234555 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-11 01:06:27.234566 | orchestrator |
2025-09-11 01:06:27.234577 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-11 01:06:27.234588 | orchestrator | Thursday 11 September 2025  01:05:24 +0000 (0:00:03.618) 0:00:05.087 ****
2025-09-11 01:06:27.234598 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-11 01:06:27.234610 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-11 01:06:27.234620 | orchestrator |
2025-09-11 01:06:27.234631 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-11 01:06:27.234642 | orchestrator | Thursday 11 September 2025  01:05:32 +0000 (0:00:07.331) 0:00:12.419 ****
2025-09-11 01:06:27.234653 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-11 01:06:27.234664 | orchestrator |
2025-09-11 01:06:27.234675 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-11 01:06:27.234686 | orchestrator | Thursday 11 September 2025  01:05:35 +0000 (0:00:03.475) 0:00:15.895 ****
2025-09-11 01:06:27.234697 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-11 01:06:27.234708 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-11 01:06:27.234719 | orchestrator |
2025-09-11 01:06:27.234730 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-11 01:06:27.234740 | orchestrator | Thursday 11 September 2025  01:05:39 +0000 (0:00:03.833) 0:00:19.728 ****
2025-09-11 01:06:27.234764 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-11 01:06:27.234778 | orchestrator |
2025-09-11 01:06:27.234790 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-09-11 01:06:27.234803 | orchestrator | Thursday 11 September 2025  01:05:43 +0000 (0:00:03.419) 0:00:23.148 ****
2025-09-11 01:06:27.234815 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-09-11 01:06:27.234829 | orchestrator |
2025-09-11 01:06:27.234843 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-11 01:06:27.234856 | orchestrator | Thursday 11 September 2025  01:05:47 +0000 (0:00:04.474) 0:00:27.622 ****
2025-09-11 01:06:27.234890 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:06:27.234904 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:06:27.234917 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:06:27.234929 | orchestrator |
2025-09-11 01:06:27.234943 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-09-11 01:06:27.234956 | orchestrator | Thursday 11 September 2025  01:05:47 +0000 (0:00:00.261) 0:00:27.884 ****
2025-09-11 01:06:27.234973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235041 | orchestrator |
2025-09-11 01:06:27.235054 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-09-11 01:06:27.235067 | orchestrator | Thursday 11 September 2025  01:05:48 +0000 (0:00:00.836) 0:00:28.721 ****
2025-09-11 01:06:27.235079 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:06:27.235092 | orchestrator |
2025-09-11 01:06:27.235105 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-09-11 01:06:27.235118 | orchestrator | Thursday 11 September 2025  01:05:48 +0000 (0:00:00.129) 0:00:28.851 ****
2025-09-11 01:06:27.235131 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:06:27.235144 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:06:27.235156 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:06:27.235166 | orchestrator |
2025-09-11 01:06:27.235177 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-11 01:06:27.235188 | orchestrator | Thursday 11 September 2025  01:05:49 +0000 (0:00:00.399) 0:00:29.250 ****
2025-09-11 01:06:27.235284 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 01:06:27.235298 | orchestrator |
2025-09-11 01:06:27.235309 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-09-11 01:06:27.235320 | orchestrator | Thursday 11 September 2025  01:05:49 +0000 (0:00:00.450) 0:00:29.701 ****
2025-09-11 01:06:27.235331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235384 | orchestrator |
2025-09-11 01:06:27.235395 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-09-11 01:06:27.235406 | orchestrator | Thursday 11 September 2025  01:05:50 +0000 (0:00:01.331) 0:00:31.033 ****
2025-09-11 01:06:27.235417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235429 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:06:27.235445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235463 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:06:27.235480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235491 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:06:27.235502 | orchestrator |
2025-09-11 01:06:27.235513 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-09-11 01:06:27.235524 | orchestrator | Thursday 11 September 2025  01:05:51 +0000 (0:00:00.619) 0:00:31.653 ****
2025-09-11 01:06:27.235535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235546 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:06:27.235558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235569 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:06:27.235584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235602 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:06:27.235613 | orchestrator |
2025-09-11 01:06:27.235624 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-09-11 01:06:27.235635 | orchestrator | Thursday 11 September 2025  01:05:52 +0000 (0:00:00.616) 0:00:32.269 ****
2025-09-11 01:06:27.235651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235685 | orchestrator |
2025-09-11 01:06:27.235696 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-09-11 01:06:27.235707 | orchestrator | Thursday 11 September 2025  01:05:53 +0000 (0:00:01.289) 0:00:33.559 ****
2025-09-11 01:06:27.235723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235770 | orchestrator |
2025-09-11 01:06:27.235781 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-09-11 01:06:27.235792 | orchestrator | Thursday 11 September 2025  01:05:55 +0000 (0:00:02.123) 0:00:35.682 ****
2025-09-11 01:06:27.235803 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-11 01:06:27.235814 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-11 01:06:27.235825 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-11 01:06:27.235836 | orchestrator |
2025-09-11 01:06:27.235846 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-09-11 01:06:27.235857 | orchestrator | Thursday 11 September 2025  01:05:57 +0000 (0:00:01.621) 0:00:37.304 ****
2025-09-11 01:06:27.235886 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:06:27.235897 | orchestrator | changed: [testbed-node-1]
2025-09-11 01:06:27.235907 | orchestrator | changed: [testbed-node-2]
2025-09-11 01:06:27.235918 | orchestrator |
2025-09-11 01:06:27.235929 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-09-11 01:06:27.235940 | orchestrator | Thursday 11 September 2025  01:05:58 +0000 (0:00:01.316) 0:00:38.621 ****
2025-09-11 01:06:27.235955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235973 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:06:27.235984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.235996 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:06:27.236013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.236025 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:06:27.236035 | orchestrator |
2025-09-11 01:06:27.236046 | orchestrator | TASK [placement : Check placement containers] **********************************
2025-09-11 01:06:27.236057 | orchestrator | Thursday 11 September 2025  01:05:58 +0000 (0:00:00.464) 0:00:39.085 ****
2025-09-11 01:06:27.236068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.236093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.236109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-11 01:06:27.236121 | orchestrator |
2025-09-11 01:06:27.236132 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-09-11 01:06:27.236143 | orchestrator | Thursday 11 September 2025  01:06:00 +0000 (0:00:01.066) 0:00:40.151 ****
2025-09-11 01:06:27.236153 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:06:27.236164 | orchestrator |
2025-09-11 01:06:27.236175 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-09-11 01:06:27.236186 | orchestrator | Thursday 11 September 2025  01:06:02 +0000 (0:00:02.571) 0:00:42.722 ****
2025-09-11 01:06:27.236196 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:06:27.236207 | orchestrator |
2025-09-11 01:06:27.236218 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-09-11 01:06:27.236228 | orchestrator | Thursday 11 September 2025  01:06:04 +0000 (0:00:02.249) 0:00:44.972 ****
2025-09-11 01:06:27.236239 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:06:27.236250 | orchestrator |
2025-09-11 01:06:27.236261 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-11 01:06:27.236271 | orchestrator | Thursday 11 September 2025  01:06:18 +0000 (0:00:13.586) 0:00:58.558 ****
2025-09-11 01:06:27.236282 | orchestrator |
2025-09-11 01:06:27.236293 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-11 01:06:27.236304 | orchestrator | Thursday 11 September 2025  01:06:18 +0000 (0:00:00.135) 0:00:58.694 ****
2025-09-11 01:06:27.236314 | orchestrator |
2025-09-11 01:06:27.236331 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-11 01:06:27.236342 | orchestrator | Thursday 11 September 2025  01:06:18 +0000 (0:00:00.148) 0:00:58.843 ****
2025-09-11 01:06:27.236353 | orchestrator |
2025-09-11 01:06:27.236364 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-09-11 01:06:27.236375 | orchestrator | Thursday 11 September 2025  01:06:18 +0000 (0:00:00.159) 0:00:59.003 ****
2025-09-11 01:06:27.236385 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:06:27.236396 | orchestrator | changed: [testbed-node-1]
2025-09-11 01:06:27.236407 | orchestrator | changed: [testbed-node-2]
2025-09-11 01:06:27.236417 | orchestrator |
2025-09-11 01:06:27.236428 | orchestrator | PLAY RECAP *********************************************************************
2025-09-11 01:06:27.236447 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-11 01:06:27.236459 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-11 01:06:27.236470 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-11 01:06:27.236481 | orchestrator |
2025-09-11 01:06:27.236492 | orchestrator |
2025-09-11 01:06:27.236503 | orchestrator | TASKS RECAP ********************************************************************
2025-09-11 01:06:27.236514 | orchestrator | Thursday 11 September 2025  01:06:25 +0000 (0:00:06.494) 0:01:05.497 ****
2025-09-11 01:06:27.236524 | orchestrator | ===============================================================================
2025-09-11 01:06:27.236535 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.59s
2025-09-11 01:06:27.236546 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.33s
2025-09-11 01:06:27.236557 | orchestrator | placement : Restart placement-api container ----------------------------- 6.49s
2025-09-11 01:06:27.236567 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.47s
2025-09-11 01:06:27.236578 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.83s
2025-09-11 01:06:27.236589 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.62s
2025-09-11 01:06:27.236600 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.48s
2025-09-11 01:06:27.236610 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.42s
2025-09-11 01:06:27.236621 | orchestrator | placement : Creating placement databases -------------------------------- 2.57s
2025-09-11 01:06:27.236632 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.25s
2025-09-11 01:06:27.236643 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.12s
2025-09-11 01:06:27.236653 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.62s
2025-09-11 01:06:27.236664 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.33s
2025-09-11 01:06:27.236675 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.32s
2025-09-11 01:06:27.236685 | orchestrator | placement : Copying over config.json files for services ----------------- 1.29s
2025-09-11 01:06:27.236700 | orchestrator | placement : Check placement containers ---------------------------------- 1.07s
2025-09-11 01:06:27.236711 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.84s
2025-09-11 01:06:27.236722 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.62s
2025-09-11 01:06:27.236733 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.62s
2025-09-11 01:06:27.236780 | orchestrator | placement : include_tasks ----------------------------------------------- 0.52s
2025-09-11 01:06:27.236905 | orchestrator | 2025-09-11 01:06:27 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED
2025-09-11 01:06:27.236921 | orchestrator | 2025-09-11 01:06:27 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED
2025-09-11 01:06:27.238346 | orchestrator | 2025-09-11 01:06:27 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED
2025-09-11 01:06:27.238369 | orchestrator | 2025-09-11 01:06:27 | INFO  | Wait 1 second(s) until the next check
2025-09-11 01:06:30.275831 | orchestrator | 2025-09-11 01:06:30 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED
2025-09-11 01:06:30.276623 | orchestrator | 2025-09-11 01:06:30 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED
2025-09-11 01:06:30.277883 | orchestrator | 2025-09-11 01:06:30 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED
2025-09-11 01:06:30.279490 | orchestrator | 2025-09-11 01:06:30 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state STARTED
2025-09-11 01:06:30.279520 | orchestrator | 2025-09-11 01:06:30 | INFO  | Wait 1 second(s) until the next check
2025-09-11
01:06:33.320105 | orchestrator | [identical polling output elided: from 01:06:33 to 01:07:12 the same four status checks were logged every ~3 s — Tasks b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8, 5ac7a703-2600-4a43-a239-ba561a9e198c, 526cd17d-768e-4028-a037-9bf05be25ad2 and 157caa23-7299-412d-b681-018fc73dfcda all remained in state STARTED, each iteration followed by "Wait 1 second(s) until the next check"] 2025-09-11 01:07:15.858462 | orchestrator | 2025-09-11 01:07:15 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:15.860762 | orchestrator | 2025-09-11 01:07:15 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:07:15.863112 | orchestrator | 2025-09-11 01:07:15 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:07:15.864900 | orchestrator | 2025-09-11 01:07:15 | INFO  | Task 4d2a32c0-bb28-4bdd-b8e8-dcf8fcf5ff70 is in state STARTED 2025-09-11 01:07:15.867132 | orchestrator | 2025-09-11 01:07:15 | INFO  | Task 157caa23-7299-412d-b681-018fc73dfcda is in state SUCCESS 2025-09-11 01:07:15.869551 | orchestrator | 2025-09-11 01:07:15.869583 | orchestrator
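The repeated status lines above come from a simple poll-and-wait loop in the deploy tooling: a set of task IDs is checked on an interval until each leaves the STARTED state. A minimal sketch of that pattern in Python (the `wait_for_tasks` name and `get_task_state` helper are hypothetical, not the actual OSISM client API, which is not shown in this log):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task ID until it reports a terminal state.

    `get_task_state` is an assumed callable returning a state string
    such as "STARTED", "SUCCESS" or "FAILURE" for a given task ID.
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding while looping is safe
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

With several long-running tasks in flight, as here, the loop prints one status line per task per iteration, which is why the same four task IDs recur until one of them flips to SUCCESS.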
| 2025-09-11 01:07:15.869594 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 01:07:15.869606 | orchestrator | 2025-09-11 01:07:15.869617 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 01:07:15.869629 | orchestrator | Thursday 11 September 2025 01:05:25 +0000 (0:00:00.216) 0:00:00.216 **** 2025-09-11 01:07:15.869640 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:07:15.869652 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:07:15.869662 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:07:15.869673 | orchestrator | 2025-09-11 01:07:15.869684 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 01:07:15.869695 | orchestrator | Thursday 11 September 2025 01:05:26 +0000 (0:00:00.258) 0:00:00.475 **** 2025-09-11 01:07:15.869705 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-11 01:07:15.869716 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-11 01:07:15.869727 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-11 01:07:15.869761 | orchestrator | 2025-09-11 01:07:15.869773 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-11 01:07:15.869783 | orchestrator | 2025-09-11 01:07:15.869820 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-11 01:07:15.869831 | orchestrator | Thursday 11 September 2025 01:05:26 +0000 (0:00:00.334) 0:00:00.809 **** 2025-09-11 01:07:15.869842 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:07:15.869853 | orchestrator | 2025-09-11 01:07:15.869864 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-11 01:07:15.869875 | orchestrator | Thursday 11 
September 2025 01:05:26 +0000 (0:00:00.461) 0:00:01.271 **** 2025-09-11 01:07:15.869886 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-11 01:07:15.869897 | orchestrator | 2025-09-11 01:07:15.869908 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-11 01:07:15.869999 | orchestrator | Thursday 11 September 2025 01:05:30 +0000 (0:00:03.606) 0:00:04.877 **** 2025-09-11 01:07:15.870012 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-11 01:07:15.870104 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-11 01:07:15.870116 | orchestrator | 2025-09-11 01:07:15.870127 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-11 01:07:15.870137 | orchestrator | Thursday 11 September 2025 01:05:37 +0000 (0:00:07.098) 0:00:11.976 **** 2025-09-11 01:07:15.870148 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-11 01:07:15.870161 | orchestrator | 2025-09-11 01:07:15.870174 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-11 01:07:15.870188 | orchestrator | Thursday 11 September 2025 01:05:40 +0000 (0:00:03.037) 0:00:15.013 **** 2025-09-11 01:07:15.870200 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-11 01:07:15.870212 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-11 01:07:15.870225 | orchestrator | 2025-09-11 01:07:15.870250 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-11 01:07:15.870264 | orchestrator | Thursday 11 September 2025 01:05:44 +0000 (0:00:04.038) 0:00:19.052 **** 2025-09-11 01:07:15.870277 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-11 01:07:15.870289 | orchestrator | 
2025-09-11 01:07:15.870301 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-11 01:07:15.870313 | orchestrator | Thursday 11 September 2025 01:05:48 +0000 (0:00:03.681) 0:00:22.733 **** 2025-09-11 01:07:15.870325 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-11 01:07:15.870337 | orchestrator | 2025-09-11 01:07:15.870349 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-11 01:07:15.870362 | orchestrator | Thursday 11 September 2025 01:05:52 +0000 (0:00:04.146) 0:00:26.880 **** 2025-09-11 01:07:15.870375 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:15.870387 | orchestrator | 2025-09-11 01:07:15.870400 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-11 01:07:15.870412 | orchestrator | Thursday 11 September 2025 01:05:55 +0000 (0:00:03.374) 0:00:30.254 **** 2025-09-11 01:07:15.870424 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:15.870436 | orchestrator | 2025-09-11 01:07:15.870448 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-11 01:07:15.870460 | orchestrator | Thursday 11 September 2025 01:06:00 +0000 (0:00:04.120) 0:00:34.375 **** 2025-09-11 01:07:15.870473 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:15.870486 | orchestrator | 2025-09-11 01:07:15.870498 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-11 01:07:15.870510 | orchestrator | Thursday 11 September 2025 01:06:03 +0000 (0:00:03.594) 0:00:37.970 **** 2025-09-11 01:07:15.870547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.870562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.870574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.870591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.870604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.870629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.870641 | orchestrator | 2025-09-11 01:07:15.870653 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-11 01:07:15.870664 | orchestrator | Thursday 11 September 2025 01:06:05 +0000 (0:00:01.401) 0:00:39.371 **** 2025-09-11 01:07:15.870674 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:15.870685 | orchestrator | 2025-09-11 01:07:15.870696 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-11 01:07:15.870707 | orchestrator | Thursday 11 September 2025 01:06:05 +0000 (0:00:00.135) 0:00:39.507 **** 2025-09-11 01:07:15.870718 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:15.870729 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:15.870739 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:15.870750 | orchestrator | 2025-09-11 01:07:15.870761 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-11 01:07:15.870771 | orchestrator | Thursday 11 September 2025 01:06:05 +0000 (0:00:00.442) 0:00:39.949 **** 2025-09-11 01:07:15.870782 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-11 01:07:15.870811 | orchestrator | 2025-09-11 01:07:15.870823 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-11 01:07:15.870833 | orchestrator | Thursday 11 September 2025 01:06:06 +0000 (0:00:00.779) 0:00:40.728 **** 2025-09-11 01:07:15.870845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.870862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.870881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.870901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.870913 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.870925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.870936 | orchestrator | 2025-09-11 01:07:15.870947 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-11 01:07:15.870958 | orchestrator | Thursday 11 September 2025 01:06:08 +0000 (0:00:02.273) 0:00:43.002 **** 2025-09-11 01:07:15.870969 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:07:15.870980 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:07:15.870991 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:07:15.871002 | orchestrator | 2025-09-11 01:07:15.871012 | 
orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-11 01:07:15.871023 | orchestrator | Thursday 11 September 2025 01:06:08 +0000 (0:00:00.255) 0:00:43.258 **** 2025-09-11 01:07:15.871038 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:07:15.871055 | orchestrator | 2025-09-11 01:07:15.871066 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-11 01:07:15.871077 | orchestrator | Thursday 11 September 2025 01:06:09 +0000 (0:00:00.629) 0:00:43.887 **** 2025-09-11 01:07:15.871088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.871106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.871117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.871129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.871144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.871162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.871173 | orchestrator | 2025-09-11 01:07:15.871184 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-11 01:07:15.871195 | orchestrator | Thursday 11 
September 2025 01:06:12 +0000 (0:00:02.466) 0:00:46.353 **** 2025-09-11 01:07:15.871212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 01:07:15.871224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:15.871236 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:15.871248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 01:07:15.871269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:15.871281 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:15.871292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 01:07:15.871309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:15.871321 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:15.871332 | orchestrator | 2025-09-11 01:07:15.871343 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-11 01:07:15.871354 | orchestrator | Thursday 11 September 2025 01:06:12 +0000 (0:00:00.561) 0:00:46.914 **** 2025-09-11 01:07:15.871365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 01:07:15.871376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:15.871399 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:15.871415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 01:07:15.871427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:15.871438 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:15.871456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 01:07:15.871468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:15.871491 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:15.871509 | orchestrator | 2025-09-11 01:07:15.871528 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-11 01:07:15.871547 | orchestrator | Thursday 11 September 2025 01:06:13 +0000 (0:00:00.799) 0:00:47.714 **** 2025-09-11 01:07:15.871573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.871593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.871622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.871643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.871664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.871701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.871714 | orchestrator | 2025-09-11 01:07:15.871725 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-11 01:07:15.871736 | orchestrator | Thursday 11 September 2025 01:06:15 +0000 (0:00:02.442) 0:00:50.156 **** 2025-09-11 01:07:15.871747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.871765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.871777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.871823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.871841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.871852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.871863 | orchestrator | 2025-09-11 01:07:15.871874 | orchestrator | TASK 
[magnum : Copying over existing policy file] ****************************** 2025-09-11 01:07:15.871885 | orchestrator | Thursday 11 September 2025 01:06:22 +0000 (0:00:06.543) 0:00:56.700 **** 2025-09-11 01:07:15.871903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 01:07:15.871915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:15.871932 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:15.871943 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 01:07:15.871960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:15.871971 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:15.871982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-11 01:07:15.872000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:15.872012 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:15.872023 | orchestrator | 2025-09-11 01:07:15.872034 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-11 01:07:15.872051 | orchestrator | Thursday 11 September 2025 01:06:23 +0000 (0:00:00.613) 0:00:57.314 **** 2025-09-11 01:07:15.872062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.872078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.872090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-11 01:07:15.872102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.872120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.872138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:15.872149 | orchestrator | 2025-09-11 01:07:15.872160 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-11 01:07:15.872171 | orchestrator | Thursday 11 September 2025 01:06:25 +0000 (0:00:02.448) 0:00:59.762 **** 2025-09-11 01:07:15.872182 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:15.872193 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:15.872204 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:15.872215 | orchestrator | 2025-09-11 01:07:15.872226 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-11 01:07:15.872237 | orchestrator | Thursday 11 September 2025 01:06:25 +0000 (0:00:00.237) 0:01:00.000 **** 2025-09-11 01:07:15.872248 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:15.872259 | orchestrator | 2025-09-11 01:07:15.872269 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-11 01:07:15.872280 | orchestrator | Thursday 
11 September 2025 01:06:28 +0000 (0:00:02.413) 0:01:02.414 **** 2025-09-11 01:07:15.872291 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:15.872302 | orchestrator | 2025-09-11 01:07:15.872313 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-11 01:07:15.872324 | orchestrator | Thursday 11 September 2025 01:06:30 +0000 (0:00:02.414) 0:01:04.828 **** 2025-09-11 01:07:15.872334 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:15.872345 | orchestrator | 2025-09-11 01:07:15.872360 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-11 01:07:15.872371 | orchestrator | Thursday 11 September 2025 01:06:44 +0000 (0:00:13.937) 0:01:18.766 **** 2025-09-11 01:07:15.872382 | orchestrator | 2025-09-11 01:07:15.872393 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-11 01:07:15.872404 | orchestrator | Thursday 11 September 2025 01:06:44 +0000 (0:00:00.060) 0:01:18.826 **** 2025-09-11 01:07:15.872415 | orchestrator | 2025-09-11 01:07:15.872426 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-11 01:07:15.872437 | orchestrator | Thursday 11 September 2025 01:06:44 +0000 (0:00:00.059) 0:01:18.885 **** 2025-09-11 01:07:15.872447 | orchestrator | 2025-09-11 01:07:15.872458 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-11 01:07:15.872469 | orchestrator | Thursday 11 September 2025 01:06:44 +0000 (0:00:00.059) 0:01:18.945 **** 2025-09-11 01:07:15.872480 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:15.872490 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:07:15.872501 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:07:15.872512 | orchestrator | 2025-09-11 01:07:15.872523 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] 
****************** 2025-09-11 01:07:15.872534 | orchestrator | Thursday 11 September 2025 01:06:57 +0000 (0:00:13.153) 0:01:32.098 **** 2025-09-11 01:07:15.872545 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:15.872555 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:07:15.872566 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:07:15.872583 | orchestrator | 2025-09-11 01:07:15.872594 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:07:15.872605 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-11 01:07:15.872617 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-11 01:07:15.872627 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-11 01:07:15.872638 | orchestrator | 2025-09-11 01:07:15.872649 | orchestrator | 2025-09-11 01:07:15.872660 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:07:15.872670 | orchestrator | Thursday 11 September 2025 01:07:13 +0000 (0:00:15.299) 0:01:47.398 **** 2025-09-11 01:07:15.872681 | orchestrator | =============================================================================== 2025-09-11 01:07:15.872692 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.30s 2025-09-11 01:07:15.872709 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 13.94s 2025-09-11 01:07:15.872720 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.15s 2025-09-11 01:07:15.872731 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.10s 2025-09-11 01:07:15.872742 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.54s 2025-09-11 
01:07:15.872753 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.15s 2025-09-11 01:07:15.872763 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.12s 2025-09-11 01:07:15.872774 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.04s 2025-09-11 01:07:15.872785 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.68s 2025-09-11 01:07:15.872833 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.61s 2025-09-11 01:07:15.872845 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.59s 2025-09-11 01:07:15.872855 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.37s 2025-09-11 01:07:15.872866 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.04s 2025-09-11 01:07:15.872877 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.47s 2025-09-11 01:07:15.872888 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.45s 2025-09-11 01:07:15.872898 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.44s 2025-09-11 01:07:15.872909 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.41s 2025-09-11 01:07:15.872920 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.41s 2025-09-11 01:07:15.872931 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.27s 2025-09-11 01:07:15.872941 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.40s 2025-09-11 01:07:15.872952 | orchestrator | 2025-09-11 01:07:15 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:18.896300 | 
orchestrator | 2025-09-11 01:07:18 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:18.898689 | orchestrator | 2025-09-11 01:07:18 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:07:18.898746 | orchestrator | 2025-09-11 01:07:18 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:07:18.898770 | orchestrator | 2025-09-11 01:07:18 | INFO  | Task 4d2a32c0-bb28-4bdd-b8e8-dcf8fcf5ff70 is in state SUCCESS 2025-09-11 01:07:18.898828 | orchestrator | 2025-09-11 01:07:18 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:21.947948 | orchestrator | 2025-09-11 01:07:21 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:21.950438 | orchestrator | 2025-09-11 01:07:21 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:07:21.953818 | orchestrator | 2025-09-11 01:07:21 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:21.955452 | orchestrator | 2025-09-11 01:07:21 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:07:21.955852 | orchestrator | 2025-09-11 01:07:21 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:25.021652 | orchestrator | 2025-09-11 01:07:25 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:25.021734 | orchestrator | 2025-09-11 01:07:25 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:07:25.021765 | orchestrator | 2025-09-11 01:07:25 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:25.022186 | orchestrator | 2025-09-11 01:07:25 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:07:25.022574 | orchestrator | 2025-09-11 01:07:25 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:28.071100 | orchestrator | 2025-09-11 
01:07:28 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:28.073006 | orchestrator | 2025-09-11 01:07:28 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:07:28.074300 | orchestrator | 2025-09-11 01:07:28 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:28.075850 | orchestrator | 2025-09-11 01:07:28 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:07:28.075870 | orchestrator | 2025-09-11 01:07:28 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:31.119466 | orchestrator | 2025-09-11 01:07:31 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:31.121688 | orchestrator | 2025-09-11 01:07:31 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:07:31.125551 | orchestrator | 2025-09-11 01:07:31 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:31.127446 | orchestrator | 2025-09-11 01:07:31 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:07:31.127480 | orchestrator | 2025-09-11 01:07:31 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:34.172672 | orchestrator | 2025-09-11 01:07:34 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:34.175136 | orchestrator | 2025-09-11 01:07:34 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state STARTED 2025-09-11 01:07:34.178363 | orchestrator | 2025-09-11 01:07:34 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:34.180733 | orchestrator | 2025-09-11 01:07:34 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:07:34.181175 | orchestrator | 2025-09-11 01:07:34 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:37.223458 | orchestrator | 2025-09-11 01:07:37 | INFO  | Task 
b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:37.224760 | orchestrator | 2025-09-11 01:07:37 | INFO  | Task 5ac7a703-2600-4a43-a239-ba561a9e198c is in state SUCCESS 2025-09-11 01:07:37.226496 | orchestrator | 2025-09-11 01:07:37 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:37.227737 | orchestrator | 2025-09-11 01:07:37 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:07:37.227758 | orchestrator | 2025-09-11 01:07:37 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:40.276247 | orchestrator | 2025-09-11 01:07:40 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:40.278842 | orchestrator | 2025-09-11 01:07:40 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:40.281473 | orchestrator | 2025-09-11 01:07:40 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:07:40.281504 | orchestrator | 2025-09-11 01:07:40 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:43.321730 | orchestrator | 2025-09-11 01:07:43 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:43.322114 | orchestrator | 2025-09-11 01:07:43 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:43.323447 | orchestrator | 2025-09-11 01:07:43 | INFO  | Task 526cd17d-768e-4028-a037-9bf05be25ad2 is in state STARTED 2025-09-11 01:07:43.323840 | orchestrator | 2025-09-11 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:46.362012 | orchestrator | 2025-09-11 01:07:46 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:46.363120 | orchestrator | 2025-09-11 01:07:46 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:46.366902 | orchestrator | 2025-09-11 01:07:46 | INFO  | Task 
526cd17d-768e-4028-a037-9bf05be25ad2 is in state SUCCESS 2025-09-11 01:07:46.371267 | orchestrator | 2025-09-11 01:07:46.371302 | orchestrator | 2025-09-11 01:07:46.371314 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 01:07:46.371326 | orchestrator | 2025-09-11 01:07:46.371337 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 01:07:46.371349 | orchestrator | Thursday 11 September 2025 01:07:16 +0000 (0:00:00.164) 0:00:00.164 **** 2025-09-11 01:07:46.371360 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:07:46.371372 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:07:46.371383 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:07:46.371393 | orchestrator | 2025-09-11 01:07:46.371454 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 01:07:46.371466 | orchestrator | Thursday 11 September 2025 01:07:17 +0000 (0:00:00.285) 0:00:00.450 **** 2025-09-11 01:07:46.371477 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-11 01:07:46.371488 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-11 01:07:46.371499 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-11 01:07:46.371510 | orchestrator | 2025-09-11 01:07:46.371521 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-11 01:07:46.371532 | orchestrator | 2025-09-11 01:07:46.371543 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-11 01:07:46.371553 | orchestrator | Thursday 11 September 2025 01:07:17 +0000 (0:00:00.521) 0:00:00.971 **** 2025-09-11 01:07:46.371564 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:07:46.371577 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:07:46.371588 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:07:46.371598 | 
orchestrator | 2025-09-11 01:07:46.371609 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:07:46.371621 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 01:07:46.371634 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 01:07:46.371644 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 01:07:46.371680 | orchestrator | 2025-09-11 01:07:46.371691 | orchestrator | 2025-09-11 01:07:46.371702 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:07:46.371713 | orchestrator | Thursday 11 September 2025 01:07:18 +0000 (0:00:00.645) 0:00:01.617 **** 2025-09-11 01:07:46.371724 | orchestrator | =============================================================================== 2025-09-11 01:07:46.371735 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.65s 2025-09-11 01:07:46.371745 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-09-11 01:07:46.371787 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-09-11 01:07:46.371798 | orchestrator | 2025-09-11 01:07:46.371809 | orchestrator | 2025-09-11 01:07:46.371820 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-11 01:07:46.371972 | orchestrator | 2025-09-11 01:07:46.371986 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-11 01:07:46.371998 | orchestrator | Thursday 11 September 2025 01:03:51 +0000 (0:00:00.086) 0:00:00.086 **** 2025-09-11 01:07:46.372011 | orchestrator | changed: [localhost] 2025-09-11 01:07:46.372024 | orchestrator | 2025-09-11 01:07:46.372036 | 
orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-11 01:07:46.372049 | orchestrator | Thursday 11 September 2025 01:03:52 +0000 (0:00:00.719) 0:00:00.805 **** 2025-09-11 01:07:46.372061 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2025-09-11 01:07:46.372074 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left). 2025-09-11 01:07:46.372086 | orchestrator | changed: [localhost] 2025-09-11 01:07:46.372099 | orchestrator | 2025-09-11 01:07:46.372111 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-11 01:07:46.372124 | orchestrator | Thursday 11 September 2025 01:05:06 +0000 (0:01:14.428) 0:01:15.233 **** 2025-09-11 01:07:46.372136 | orchestrator | 2025-09-11 01:07:46.372149 | orchestrator | STILL ALIVE [task 'Download ironic-agent kernel' is running] ******************* 2025-09-11 01:07:46.372161 | orchestrator | changed: [localhost] 2025-09-11 01:07:46.372174 | orchestrator | 2025-09-11 01:07:46.372186 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 01:07:46.372198 | orchestrator | 2025-09-11 01:07:46.372212 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 01:07:46.372225 | orchestrator | Thursday 11 September 2025 01:07:33 +0000 (0:02:26.732) 0:03:41.965 **** 2025-09-11 01:07:46.372236 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:07:46.372246 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:07:46.372257 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:07:46.372268 | orchestrator | 2025-09-11 01:07:46.372278 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 01:07:46.372289 | orchestrator | Thursday 11 September 2025 01:07:33 +0000 (0:00:00.280) 0:03:42.245 **** 2025-09-11 
01:07:46.372311 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-11 01:07:46.372322 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-11 01:07:46.372333 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-11 01:07:46.372344 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-11 01:07:46.372355 | orchestrator | 2025-09-11 01:07:46.372366 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-11 01:07:46.372377 | orchestrator | skipping: no hosts matched 2025-09-11 01:07:46.372389 | orchestrator | 2025-09-11 01:07:46.372400 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:07:46.372410 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 01:07:46.372435 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 01:07:46.372456 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 01:07:46.372467 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 01:07:46.372477 | orchestrator | 2025-09-11 01:07:46.372488 | orchestrator | 2025-09-11 01:07:46.372499 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:07:46.372510 | orchestrator | Thursday 11 September 2025 01:07:34 +0000 (0:00:00.426) 0:03:42.672 **** 2025-09-11 01:07:46.372521 | orchestrator | =============================================================================== 2025-09-11 01:07:46.372531 | orchestrator | Download ironic-agent kernel ------------------------------------------ 146.73s 2025-09-11 01:07:46.372542 | orchestrator | Download ironic-agent initramfs 
---------------------------------------- 74.43s 2025-09-11 01:07:46.372553 | orchestrator | Ensure the destination directory exists --------------------------------- 0.72s 2025-09-11 01:07:46.372563 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-09-11 01:07:46.372574 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2025-09-11 01:07:46.372584 | orchestrator | 2025-09-11 01:07:46.372595 | orchestrator | 2025-09-11 01:07:46.372606 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 01:07:46.372616 | orchestrator | 2025-09-11 01:07:46.372627 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-11 01:07:46.372638 | orchestrator | Thursday 11 September 2025 00:59:18 +0000 (0:00:00.210) 0:00:00.210 **** 2025-09-11 01:07:46.372648 | orchestrator | changed: [testbed-manager] 2025-09-11 01:07:46.372674 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.372686 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:07:46.372710 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:07:46.372720 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.372786 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.372798 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.372809 | orchestrator | 2025-09-11 01:07:46.372820 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 01:07:46.372842 | orchestrator | Thursday 11 September 2025 00:59:18 +0000 (0:00:00.595) 0:00:00.805 **** 2025-09-11 01:07:46.372853 | orchestrator | changed: [testbed-manager] 2025-09-11 01:07:46.372864 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.372875 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:07:46.372885 | orchestrator | changed: [testbed-node-2] 2025-09-11 
01:07:46.372896 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.372943 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.372954 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.372965 | orchestrator | 2025-09-11 01:07:46.373021 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 01:07:46.373033 | orchestrator | Thursday 11 September 2025 00:59:19 +0000 (0:00:00.578) 0:00:01.384 **** 2025-09-11 01:07:46.373044 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-11 01:07:46.373136 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-11 01:07:46.373147 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-11 01:07:46.373157 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-11 01:07:46.373168 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-11 01:07:46.373179 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-11 01:07:46.373217 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-11 01:07:46.373228 | orchestrator | 2025-09-11 01:07:46.373239 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-11 01:07:46.373250 | orchestrator | 2025-09-11 01:07:46.373269 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-11 01:07:46.373280 | orchestrator | Thursday 11 September 2025 00:59:19 +0000 (0:00:00.667) 0:00:02.052 **** 2025-09-11 01:07:46.373291 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:07:46.373301 | orchestrator | 2025-09-11 01:07:46.373312 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-11 01:07:46.373323 | orchestrator | Thursday 11 September 2025 00:59:20 +0000 (0:00:00.582) 
0:00:02.634 **** 2025-09-11 01:07:46.373333 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-11 01:07:46.373344 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-11 01:07:46.373355 | orchestrator | 2025-09-11 01:07:46.373366 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-11 01:07:46.373377 | orchestrator | Thursday 11 September 2025 00:59:25 +0000 (0:00:04.438) 0:00:07.072 **** 2025-09-11 01:07:46.373387 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-11 01:07:46.373398 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-11 01:07:46.373409 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.373420 | orchestrator | 2025-09-11 01:07:46.373431 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-11 01:07:46.373442 | orchestrator | Thursday 11 September 2025 00:59:29 +0000 (0:00:04.149) 0:00:11.222 **** 2025-09-11 01:07:46.373453 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.373463 | orchestrator | 2025-09-11 01:07:46.373474 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-11 01:07:46.373485 | orchestrator | Thursday 11 September 2025 00:59:29 +0000 (0:00:00.576) 0:00:11.798 **** 2025-09-11 01:07:46.373496 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.373506 | orchestrator | 2025-09-11 01:07:46.373517 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-11 01:07:46.373528 | orchestrator | Thursday 11 September 2025 00:59:31 +0000 (0:00:01.530) 0:00:13.328 **** 2025-09-11 01:07:46.373539 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.373549 | orchestrator | 2025-09-11 01:07:46.373568 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-11 01:07:46.373579 | 
orchestrator | Thursday 11 September 2025 00:59:34 +0000 (0:00:02.837) 0:00:16.166 **** 2025-09-11 01:07:46.373590 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.373600 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.373611 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.373622 | orchestrator | 2025-09-11 01:07:46.373633 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-11 01:07:46.373644 | orchestrator | Thursday 11 September 2025 00:59:34 +0000 (0:00:00.478) 0:00:16.645 **** 2025-09-11 01:07:46.373668 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:07:46.373679 | orchestrator | 2025-09-11 01:07:46.373690 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-11 01:07:46.373701 | orchestrator | Thursday 11 September 2025 01:00:06 +0000 (0:00:31.685) 0:00:48.331 **** 2025-09-11 01:07:46.373712 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.373723 | orchestrator | 2025-09-11 01:07:46.373777 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-11 01:07:46.373813 | orchestrator | Thursday 11 September 2025 01:00:20 +0000 (0:00:13.950) 0:01:02.281 **** 2025-09-11 01:07:46.373824 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:07:46.373834 | orchestrator | 2025-09-11 01:07:46.373845 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-11 01:07:46.373856 | orchestrator | Thursday 11 September 2025 01:00:32 +0000 (0:00:12.719) 0:01:15.000 **** 2025-09-11 01:07:46.373866 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:07:46.373877 | orchestrator | 2025-09-11 01:07:46.373940 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-11 01:07:46.373951 | orchestrator | Thursday 11 September 2025 01:00:33 +0000 (0:00:01.044) 0:01:16.045 **** 
2025-09-11 01:07:46.373970 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.373981 | orchestrator |
2025-09-11 01:07:46.373992 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-11 01:07:46.374003 | orchestrator | Thursday 11 September 2025 01:00:34 +0000 (0:00:00.483) 0:01:16.528 ****
2025-09-11 01:07:46.374014 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 01:07:46.374073 | orchestrator |
2025-09-11 01:07:46.374084 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-11 01:07:46.374095 | orchestrator | Thursday 11 September 2025 01:00:34 +0000 (0:00:00.523) 0:01:17.052 ****
2025-09-11 01:07:46.374106 | orchestrator | ok: [testbed-node-0]
2025-09-11 01:07:46.374116 | orchestrator |
2025-09-11 01:07:46.374127 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-11 01:07:46.374138 | orchestrator | Thursday 11 September 2025 01:00:52 +0000 (0:00:17.776) 0:01:34.828 ****
2025-09-11 01:07:46.374149 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.374160 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.374171 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.374182 | orchestrator |
2025-09-11 01:07:46.374193 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-11 01:07:46.374203 | orchestrator |
2025-09-11 01:07:46.374214 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-11 01:07:46.374225 | orchestrator | Thursday 11 September 2025 01:00:53 +0000 (0:00:00.512) 0:01:35.341 ****
2025-09-11 01:07:46.374236 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 01:07:46.374246 | orchestrator |
2025-09-11 01:07:46.374257 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-11 01:07:46.374268 | orchestrator | Thursday 11 September 2025 01:00:53 +0000 (0:00:00.661) 0:01:36.003 ****
2025-09-11 01:07:46.374278 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.374289 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.374300 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:07:46.374311 | orchestrator |
2025-09-11 01:07:46.374321 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-11 01:07:46.374332 | orchestrator | Thursday 11 September 2025 01:00:56 +0000 (0:00:02.229) 0:01:38.232 ****
2025-09-11 01:07:46.374343 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.374354 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.374365 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:07:46.374375 | orchestrator |
2025-09-11 01:07:46.374386 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-11 01:07:46.374397 | orchestrator | Thursday 11 September 2025 01:00:58 +0000 (0:00:02.157) 0:01:40.390 ****
2025-09-11 01:07:46.374408 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.374418 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.374429 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.374440 | orchestrator |
2025-09-11 01:07:46.374450 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-11 01:07:46.374461 | orchestrator | Thursday 11 September 2025 01:00:58 +0000 (0:00:00.301) 0:01:40.691 ****
2025-09-11 01:07:46.374472 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-11 01:07:46.374483 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.374493 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-11 01:07:46.374504 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.374521 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-11 01:07:46.374532 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-11 01:07:46.374543 | orchestrator |
2025-09-11 01:07:46.374554 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-11 01:07:46.374564 | orchestrator | Thursday 11 September 2025 01:01:07 +0000 (0:00:08.917) 0:01:49.609 ****
2025-09-11 01:07:46.374575 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.374593 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.374604 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.374614 | orchestrator |
2025-09-11 01:07:46.374625 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-11 01:07:46.374636 | orchestrator | Thursday 11 September 2025 01:01:07 +0000 (0:00:00.429) 0:01:50.039 ****
2025-09-11 01:07:46.374656 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-11 01:07:46.374667 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.374678 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-11 01:07:46.374688 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.374699 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-11 01:07:46.374709 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.374720 | orchestrator |
2025-09-11 01:07:46.374731 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-11 01:07:46.374742 | orchestrator | Thursday 11 September 2025 01:01:08 +0000 (0:00:00.718) 0:01:50.757 ****
2025-09-11 01:07:46.374807 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.374819 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.374830 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:07:46.374841 | orchestrator |
2025-09-11 01:07:46.374852 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-11 01:07:46.374862 | orchestrator | Thursday 11 September 2025 01:01:09 +0000 (0:00:00.618) 0:01:51.375 ****
2025-09-11 01:07:46.374873 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.374884 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.374894 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:07:46.374905 | orchestrator |
2025-09-11 01:07:46.374916 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-11 01:07:46.374927 | orchestrator | Thursday 11 September 2025 01:01:10 +0000 (0:00:01.129) 0:01:52.505 ****
2025-09-11 01:07:46.374937 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.374948 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.374959 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:07:46.374969 | orchestrator |
2025-09-11 01:07:46.374980 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-11 01:07:46.374991 | orchestrator | Thursday 11 September 2025 01:01:12 +0000 (0:00:01.860) 0:01:54.365 ****
2025-09-11 01:07:46.375001 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.375012 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.375023 | orchestrator | ok: [testbed-node-0]
2025-09-11 01:07:46.375033 | orchestrator |
2025-09-11 01:07:46.375044 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-11 01:07:46.375055 | orchestrator | Thursday 11 September 2025 01:01:31 +0000 (0:00:19.309) 0:02:13.675 ****
2025-09-11 01:07:46.375066 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.375076 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.375087 | orchestrator | ok: [testbed-node-0]
2025-09-11 01:07:46.375098 | orchestrator |
2025-09-11 01:07:46.375108 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-11 01:07:46.375119 | orchestrator | Thursday 11 September 2025 01:01:45 +0000 (0:00:13.611) 0:02:27.286 ****
2025-09-11 01:07:46.375130 | orchestrator | ok: [testbed-node-0]
2025-09-11 01:07:46.375140 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.375151 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.375162 | orchestrator |
2025-09-11 01:07:46.375173 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-11 01:07:46.375184 | orchestrator | Thursday 11 September 2025 01:01:46 +0000 (0:00:00.994) 0:02:28.281 ****
2025-09-11 01:07:46.375195 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.375205 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.375216 | orchestrator | changed: [testbed-node-0]
2025-09-11 01:07:46.375226 | orchestrator |
2025-09-11 01:07:46.375237 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-11 01:07:46.375271 | orchestrator | Thursday 11 September 2025 01:01:58 +0000 (0:00:12.191) 0:02:40.472 ****
2025-09-11 01:07:46.375282 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.375293 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.375304 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.375314 | orchestrator |
2025-09-11 01:07:46.375325 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-11 01:07:46.375335 | orchestrator | Thursday 11 September 2025 01:01:59 +0000 (0:00:00.991) 0:02:41.463 ****
2025-09-11 01:07:46.375344 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.375354 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.375363 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.375373 | orchestrator |
2025-09-11 01:07:46.375382 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-11 01:07:46.375392 | orchestrator |
2025-09-11 01:07:46.375401 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-11 01:07:46.375411 | orchestrator | Thursday 11 September 2025 01:01:59 +0000 (0:00:00.490) 0:02:41.954 ****
2025-09-11 01:07:46.375421 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 01:07:46.375430 | orchestrator |
2025-09-11 01:07:46.375440 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-11 01:07:46.375450 | orchestrator | Thursday 11 September 2025 01:02:00 +0000 (0:00:00.553) 0:02:42.508 ****
2025-09-11 01:07:46.375459 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-11 01:07:46.375469 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-11 01:07:46.375478 | orchestrator |
2025-09-11 01:07:46.375488 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-11 01:07:46.375502 | orchestrator | Thursday 11 September 2025 01:02:03 +0000 (0:00:03.444) 0:02:45.952 ****
2025-09-11 01:07:46.375512 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-11 01:07:46.375522 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-11 01:07:46.375532 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-11 01:07:46.375541 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-11 01:07:46.375577 | orchestrator |
2025-09-11 01:07:46.375604 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-11 01:07:46.375615 | orchestrator | Thursday 11 September 2025 01:02:11 +0000 (0:00:07.843) 0:02:53.796 ****
2025-09-11 01:07:46.375625 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-11 01:07:46.375634 | orchestrator |
2025-09-11 01:07:46.375644 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-11 01:07:46.375653 | orchestrator | Thursday 11 September 2025 01:02:15 +0000 (0:00:03.545) 0:02:57.341 ****
2025-09-11 01:07:46.375663 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-11 01:07:46.375672 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-11 01:07:46.375682 | orchestrator |
2025-09-11 01:07:46.375691 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-11 01:07:46.375701 | orchestrator | Thursday 11 September 2025 01:02:19 +0000 (0:00:03.964) 0:03:01.306 ****
2025-09-11 01:07:46.375710 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-11 01:07:46.375719 | orchestrator |
2025-09-11 01:07:46.375729 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-11 01:07:46.375738 | orchestrator | Thursday 11 September 2025 01:02:22 +0000 (0:00:03.623) 0:03:04.929 ****
2025-09-11 01:07:46.375748 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-11 01:07:46.375780 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-11 01:07:46.375790 | orchestrator |
2025-09-11 01:07:46.375799 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-11 01:07:46.375809 | orchestrator | Thursday 11 September 2025 01:02:30 +0000 (0:00:08.116) 0:03:13.046 ****
2025-09-11 01:07:46.375823 |
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.375844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.375865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.375883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.375895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.375905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.375915 | orchestrator |
2025-09-11 01:07:46.375925 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-09-11 01:07:46.375935 | orchestrator | Thursday 11 September 2025 01:02:33 +0000 (0:00:02.065) 0:03:15.111 ****
2025-09-11 01:07:46.375945 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.375954 | orchestrator |
2025-09-11 01:07:46.375964 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-09-11 01:07:46.375973 | orchestrator | Thursday 11 September 2025 01:02:33 +0000 (0:00:00.263) 0:03:15.375 ****
2025-09-11 01:07:46.375983 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.375992 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.376002 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.376011 | orchestrator |
2025-09-11 01:07:46.376021 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-09-11 01:07:46.376030 | orchestrator | Thursday 11 September 2025 01:02:33 +0000 (0:00:00.476) 0:03:15.852 ****
2025-09-11 01:07:46.376040 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-11 01:07:46.376049 | orchestrator |
2025-09-11 01:07:46.376059 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-09-11 01:07:46.376068 | orchestrator | Thursday 11 September 2025 01:02:34 +0000 (0:00:00.703) 0:03:16.555 ****
2025-09-11 01:07:46.376078 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.376087 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.376097 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.376106 | orchestrator |
2025-09-11 01:07:46.376116 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-11 01:07:46.376129 | orchestrator | Thursday 11 September 2025 01:02:34 +0000 (0:00:00.306) 0:03:16.862 ****
2025-09-11 01:07:46.376139 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-11 01:07:46.376149 | orchestrator |
2025-09-11 01:07:46.376158 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-11 01:07:46.376168 | orchestrator | Thursday 11 September 2025 01:02:35 +0000 (0:00:00.452) 0:03:17.315 ****
2025-09-11 01:07:46.376186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.376204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.376215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.376232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.376254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.376265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.376275 | orchestrator |
2025-09-11 01:07:46.376285 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-09-11 01:07:46.376295 | orchestrator | Thursday 11 September 2025 01:02:38 +0000 (0:00:02.913) 0:03:20.228 ****
2025-09-11 01:07:46.376305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.376316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.376326 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.376347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.376365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.376375 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.376385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.376396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.376406 | orchestrator | skipping: [testbed-node-2]
2025-09-11 01:07:46.376415 | orchestrator |
2025-09-11 01:07:46.376425 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-09-11 01:07:46.376434 | orchestrator | Thursday 11 September 2025 01:02:39 +0000 (0:00:01.655) 0:03:21.884 ****
2025-09-11 01:07:46.376470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.376488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.376498 | orchestrator | skipping: [testbed-node-0]
2025-09-11 01:07:46.376508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.376519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.376529 | orchestrator | skipping: [testbed-node-1]
2025-09-11 01:07:46.376543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-11 01:07:46.376568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-11 01:07:46.376578
| orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.376588 | orchestrator | 2025-09-11 01:07:46.376598 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-11 01:07:46.376607 | orchestrator | Thursday 11 September 2025 01:02:40 +0000 (0:00:00.720) 0:03:22.605 **** 2025-09-11 01:07:46.376617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 01:07:46.376628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 01:07:46.376653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 01:07:46.376670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.376681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.376691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 
01:07:46.376701 | orchestrator | 2025-09-11 01:07:46.376711 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-11 01:07:46.376720 | orchestrator | Thursday 11 September 2025 01:02:43 +0000 (0:00:02.582) 0:03:25.187 **** 2025-09-11 01:07:46.376731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 01:07:46.376805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 01:07:46.376820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 01:07:46.376830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.376841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.376858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.376868 | orchestrator | 2025-09-11 01:07:46.376877 | 
orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-11 01:07:46.376887 | orchestrator | Thursday 11 September 2025 01:02:51 +0000 (0:00:08.387) 0:03:33.574 **** 2025-09-11 01:07:46.376909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-11 01:07:46.376920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.376930 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.376940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-11 01:07:46.376957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.376967 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.376987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-11 01:07:46.376999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-09-11 01:07:46.377008 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.377018 | orchestrator | 2025-09-11 01:07:46.377028 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-11 01:07:46.377037 | orchestrator | Thursday 11 September 2025 01:02:52 +0000 (0:00:01.297) 0:03:34.872 **** 2025-09-11 01:07:46.377047 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.377056 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:07:46.377066 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:07:46.377075 | orchestrator | 2025-09-11 01:07:46.377085 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-11 01:07:46.377095 | orchestrator | Thursday 11 September 2025 01:02:54 +0000 (0:00:01.853) 0:03:36.726 **** 2025-09-11 01:07:46.377104 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.377114 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.377123 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.377133 | orchestrator | 2025-09-11 01:07:46.377142 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-11 01:07:46.377152 | orchestrator | Thursday 11 September 2025 01:02:55 +0000 (0:00:00.562) 0:03:37.288 **** 2025-09-11 01:07:46.377162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 01:07:46.377183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 01:07:46.377202 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-11 01:07:46.377213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.377229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.377239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.377249 | orchestrator | 2025-09-11 01:07:46.377259 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-11 01:07:46.377268 | orchestrator | Thursday 11 September 2025 01:02:57 +0000 (0:00:02.232) 0:03:39.521 **** 2025-09-11 01:07:46.377276 | orchestrator | 2025-09-11 01:07:46.377284 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-11 01:07:46.377291 | orchestrator | Thursday 11 September 2025 01:02:57 +0000 (0:00:00.166) 0:03:39.688 **** 2025-09-11 01:07:46.377299 | orchestrator | 2025-09-11 01:07:46.377307 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-11 01:07:46.377315 | orchestrator | Thursday 11 September 2025 01:02:57 +0000 (0:00:00.223) 0:03:39.911 **** 2025-09-11 01:07:46.377323 | orchestrator | 2025-09-11 01:07:46.377330 | orchestrator | RUNNING HANDLER 
[nova : Restart nova-scheduler container] ********************** 2025-09-11 01:07:46.377338 | orchestrator | Thursday 11 September 2025 01:02:57 +0000 (0:00:00.141) 0:03:40.052 **** 2025-09-11 01:07:46.377346 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.377357 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:07:46.377365 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:07:46.377373 | orchestrator | 2025-09-11 01:07:46.377381 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-11 01:07:46.377389 | orchestrator | Thursday 11 September 2025 01:03:16 +0000 (0:00:18.179) 0:03:58.232 **** 2025-09-11 01:07:46.377396 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.377404 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:07:46.377412 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:07:46.377420 | orchestrator | 2025-09-11 01:07:46.377428 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-11 01:07:46.377435 | orchestrator | 2025-09-11 01:07:46.377443 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-11 01:07:46.377451 | orchestrator | Thursday 11 September 2025 01:03:23 +0000 (0:00:07.210) 0:04:05.442 **** 2025-09-11 01:07:46.377464 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:07:46.377472 | orchestrator | 2025-09-11 01:07:46.377480 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-11 01:07:46.377488 | orchestrator | Thursday 11 September 2025 01:03:24 +0000 (0:00:01.561) 0:04:07.004 **** 2025-09-11 01:07:46.377496 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.377503 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.377511 | 
orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.377519 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.377527 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.377534 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.377546 | orchestrator | 2025-09-11 01:07:46.377554 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-11 01:07:46.377562 | orchestrator | Thursday 11 September 2025 01:03:26 +0000 (0:00:01.299) 0:04:08.303 **** 2025-09-11 01:07:46.377570 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.377578 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.377585 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.377593 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 01:07:46.377601 | orchestrator | 2025-09-11 01:07:46.377609 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-11 01:07:46.377616 | orchestrator | Thursday 11 September 2025 01:03:27 +0000 (0:00:01.398) 0:04:09.702 **** 2025-09-11 01:07:46.377624 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-11 01:07:46.377632 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-11 01:07:46.377640 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-11 01:07:46.377648 | orchestrator | 2025-09-11 01:07:46.377656 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-11 01:07:46.377663 | orchestrator | Thursday 11 September 2025 01:03:28 +0000 (0:00:00.946) 0:04:10.649 **** 2025-09-11 01:07:46.377671 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-11 01:07:46.377679 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-11 01:07:46.377687 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-11 
01:07:46.377695 | orchestrator | 2025-09-11 01:07:46.377703 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-11 01:07:46.377710 | orchestrator | Thursday 11 September 2025 01:03:30 +0000 (0:00:01.502) 0:04:12.151 **** 2025-09-11 01:07:46.377718 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-11 01:07:46.377726 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.377734 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-11 01:07:46.377742 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.377763 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-11 01:07:46.377771 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.377779 | orchestrator | 2025-09-11 01:07:46.377787 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-11 01:07:46.377795 | orchestrator | Thursday 11 September 2025 01:03:31 +0000 (0:00:01.112) 0:04:13.264 **** 2025-09-11 01:07:46.377802 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-11 01:07:46.377810 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-11 01:07:46.377818 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.377826 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-11 01:07:46.377834 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-11 01:07:46.377841 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-11 01:07:46.377849 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-11 01:07:46.377857 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.377864 | orchestrator | changed: [testbed-node-5] => 
(item=net.bridge.bridge-nf-call-iptables) 2025-09-11 01:07:46.377872 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-11 01:07:46.377880 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-11 01:07:46.377888 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.377895 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-11 01:07:46.377903 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-11 01:07:46.377911 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-11 01:07:46.377923 | orchestrator | 2025-09-11 01:07:46.377931 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-11 01:07:46.377939 | orchestrator | Thursday 11 September 2025 01:03:32 +0000 (0:00:01.358) 0:04:14.623 **** 2025-09-11 01:07:46.377951 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.377959 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.377966 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.377974 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.377982 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.377989 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.377997 | orchestrator | 2025-09-11 01:07:46.378005 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-11 01:07:46.378013 | orchestrator | Thursday 11 September 2025 01:03:34 +0000 (0:00:01.604) 0:04:16.228 **** 2025-09-11 01:07:46.378053 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.378061 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.378069 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.378076 | orchestrator | changed: [testbed-node-5] 2025-09-11 
01:07:46.378084 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.378097 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.378105 | orchestrator | 2025-09-11 01:07:46.378113 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-11 01:07:46.378121 | orchestrator | Thursday 11 September 2025 01:03:36 +0000 (0:00:02.438) 0:04:18.666 **** 2025-09-11 01:07:46.378130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378276 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378298 | orchestrator | 2025-09-11 01:07:46.378306 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-11 01:07:46.378314 | orchestrator | Thursday 11 September 2025 01:03:39 +0000 (0:00:03.396) 0:04:22.063 **** 2025-09-11 01:07:46.378322 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:07:46.378330 | orchestrator | 2025-09-11 01:07:46.378338 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-11 
01:07:46.378346 | orchestrator | Thursday 11 September 2025 01:03:40 +0000 (0:00:00.854) 0:04:22.917 **** 2025-09-11 01:07:46.378358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378386 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378449 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.378520 | orchestrator | 2025-09-11 01:07:46.378528 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-11 01:07:46.378536 | orchestrator | Thursday 11 September 2025 01:03:44 +0000 (0:00:03.948) 0:04:26.866 **** 2025-09-11 01:07:46.378544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.378557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.378566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.378574 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.378585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.378599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.378607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.378615 | 
orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.378626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.378635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.378647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.378655 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.378667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-11 01:07:46.378676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.378684 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.378692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-11 01:07:46.378705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.378713 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.378721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-11 01:07:46.378729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.378737 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.378745 | orchestrator | 2025-09-11 01:07:46.378782 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-11 01:07:46.378790 | orchestrator | Thursday 11 September 2025 01:03:46 +0000 (0:00:01.665) 0:04:28.532 **** 2025-09-11 01:07:46.378810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.378819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.378827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.378840 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.378848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.378856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.378868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.378882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.378891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.378904 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.378912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2025-09-11 01:07:46.378920 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.378928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-11 01:07:46.378936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.378944 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.378956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-11 01:07:46.378972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.378985 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.378994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-11 01:07:46.379002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-09-11 01:07:46.379010 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.379018 | orchestrator | 2025-09-11 01:07:46.379025 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-11 01:07:46.379033 | orchestrator | Thursday 11 September 2025 01:03:48 +0000 (0:00:02.088) 0:04:30.620 **** 2025-09-11 01:07:46.379041 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.379049 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.379057 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.379065 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-11 01:07:46.379073 | orchestrator | 2025-09-11 01:07:46.379081 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-11 01:07:46.379089 | orchestrator | Thursday 11 September 2025 01:03:49 +0000 (0:00:00.964) 0:04:31.585 **** 2025-09-11 01:07:46.379096 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-11 01:07:46.379104 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-11 01:07:46.379112 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-11 01:07:46.379120 | orchestrator | 2025-09-11 01:07:46.379128 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-11 01:07:46.379135 | orchestrator | Thursday 11 September 2025 01:03:50 +0000 (0:00:01.111) 0:04:32.696 **** 2025-09-11 01:07:46.379143 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-11 01:07:46.379151 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-11 01:07:46.379159 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-11 01:07:46.379167 | orchestrator | 2025-09-11 01:07:46.379174 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-11 01:07:46.379182 | orchestrator | Thursday 11 September 
2025 01:03:51 +0000 (0:00:00.759) 0:04:33.456 **** 2025-09-11 01:07:46.379190 | orchestrator | ok: [testbed-node-3] 2025-09-11 01:07:46.379198 | orchestrator | ok: [testbed-node-4] 2025-09-11 01:07:46.379206 | orchestrator | ok: [testbed-node-5] 2025-09-11 01:07:46.379214 | orchestrator | 2025-09-11 01:07:46.379222 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-11 01:07:46.379229 | orchestrator | Thursday 11 September 2025 01:03:51 +0000 (0:00:00.423) 0:04:33.879 **** 2025-09-11 01:07:46.379237 | orchestrator | ok: [testbed-node-3] 2025-09-11 01:07:46.379245 | orchestrator | ok: [testbed-node-4] 2025-09-11 01:07:46.379253 | orchestrator | ok: [testbed-node-5] 2025-09-11 01:07:46.379261 | orchestrator | 2025-09-11 01:07:46.379269 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-11 01:07:46.379276 | orchestrator | Thursday 11 September 2025 01:03:52 +0000 (0:00:00.690) 0:04:34.570 **** 2025-09-11 01:07:46.379284 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-11 01:07:46.379297 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-11 01:07:46.379305 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-11 01:07:46.379313 | orchestrator | 2025-09-11 01:07:46.379320 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-11 01:07:46.379332 | orchestrator | Thursday 11 September 2025 01:03:53 +0000 (0:00:01.195) 0:04:35.765 **** 2025-09-11 01:07:46.379340 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-11 01:07:46.379348 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-11 01:07:46.379355 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-11 01:07:46.379363 | orchestrator | 2025-09-11 01:07:46.379371 | orchestrator | TASK [nova-cell : Copy over ceph.conf] 
***************************************** 2025-09-11 01:07:46.379379 | orchestrator | Thursday 11 September 2025 01:03:54 +0000 (0:00:01.217) 0:04:36.983 **** 2025-09-11 01:07:46.379387 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-11 01:07:46.379395 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-11 01:07:46.379407 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-11 01:07:46.379415 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-11 01:07:46.379423 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-11 01:07:46.379431 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-11 01:07:46.379439 | orchestrator | 2025-09-11 01:07:46.379447 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-11 01:07:46.379454 | orchestrator | Thursday 11 September 2025 01:03:58 +0000 (0:00:03.551) 0:04:40.535 **** 2025-09-11 01:07:46.379462 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.379470 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.379478 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.379486 | orchestrator | 2025-09-11 01:07:46.379493 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-11 01:07:46.379501 | orchestrator | Thursday 11 September 2025 01:03:58 +0000 (0:00:00.442) 0:04:40.977 **** 2025-09-11 01:07:46.379509 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.379517 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.379525 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.379532 | orchestrator | 2025-09-11 01:07:46.379540 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-11 01:07:46.379548 | orchestrator | Thursday 11 September 2025 01:03:59 +0000 (0:00:00.331) 0:04:41.308 
**** 2025-09-11 01:07:46.379556 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.379563 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.379571 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.379579 | orchestrator | 2025-09-11 01:07:46.379587 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-11 01:07:46.379594 | orchestrator | Thursday 11 September 2025 01:04:00 +0000 (0:00:01.163) 0:04:42.472 **** 2025-09-11 01:07:46.379602 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-11 01:07:46.379611 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-11 01:07:46.379619 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-11 01:07:46.379627 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-11 01:07:46.379635 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-11 01:07:46.379643 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-11 01:07:46.379655 | orchestrator | 2025-09-11 01:07:46.379663 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-11 01:07:46.379671 | orchestrator | Thursday 11 September 2025 01:04:03 +0000 (0:00:03.304) 0:04:45.777 **** 2025-09-11 01:07:46.379679 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-11 01:07:46.379687 | orchestrator | changed: 
[testbed-node-4] => (item=None) 2025-09-11 01:07:46.379694 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-11 01:07:46.379702 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-11 01:07:46.379710 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.379718 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-11 01:07:46.379726 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.379733 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-11 01:07:46.379741 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.379749 | orchestrator | 2025-09-11 01:07:46.379796 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-11 01:07:46.379804 | orchestrator | Thursday 11 September 2025 01:04:07 +0000 (0:00:03.472) 0:04:49.249 **** 2025-09-11 01:07:46.379812 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.379820 | orchestrator | 2025-09-11 01:07:46.379828 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-11 01:07:46.379836 | orchestrator | Thursday 11 September 2025 01:04:07 +0000 (0:00:00.134) 0:04:49.384 **** 2025-09-11 01:07:46.379843 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.379851 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.379859 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.379865 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.379872 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.379879 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.379885 | orchestrator | 2025-09-11 01:07:46.379892 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-11 01:07:46.379898 | orchestrator | Thursday 11 September 2025 01:04:07 +0000 (0:00:00.527) 0:04:49.911 **** 2025-09-11 01:07:46.379905 | orchestrator | ok: 
[testbed-node-3 -> localhost] 2025-09-11 01:07:46.379912 | orchestrator | 2025-09-11 01:07:46.379922 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-11 01:07:46.379929 | orchestrator | Thursday 11 September 2025 01:04:08 +0000 (0:00:00.699) 0:04:50.611 **** 2025-09-11 01:07:46.379935 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.379942 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.379949 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.379955 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.379962 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.379968 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.379975 | orchestrator | 2025-09-11 01:07:46.379982 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-11 01:07:46.379988 | orchestrator | Thursday 11 September 2025 01:04:09 +0000 (0:00:00.704) 0:04:51.315 **** 2025-09-11 01:07:46.380001 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380009 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380021 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380070 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380121 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380142 | orchestrator | 2025-09-11 01:07:46.380149 | orchestrator | TASK 
[nova-cell : Copying over nova.conf] ************************************** 2025-09-11 01:07:46.380155 | orchestrator | Thursday 11 September 2025 01:04:12 +0000 (0:00:03.682) 0:04:54.998 **** 2025-09-11 01:07:46.380163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.380176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.380188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.380201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.380208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.380215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.380222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380263 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.380310 | orchestrator | 2025-09-11 01:07:46.380317 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-11 01:07:46.380324 | orchestrator | Thursday 11 September 2025 01:04:19 +0000 (0:00:07.060) 0:05:02.059 **** 2025-09-11 01:07:46.380330 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.380337 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.380344 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.380350 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.380357 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.380364 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.380370 | orchestrator | 2025-09-11 01:07:46.380377 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-11 01:07:46.380383 | orchestrator | Thursday 11 September 2025 01:04:21 +0000 (0:00:01.185) 
0:05:03.244 **** 2025-09-11 01:07:46.380390 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-11 01:07:46.380397 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-11 01:07:46.380404 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-11 01:07:46.380410 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-11 01:07:46.380417 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-11 01:07:46.380424 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.380430 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-11 01:07:46.380437 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.380444 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-11 01:07:46.380450 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-11 01:07:46.380457 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-11 01:07:46.380464 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.380470 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-11 01:07:46.380477 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-11 01:07:46.380484 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-11 01:07:46.380490 | orchestrator | 2025-09-11 01:07:46.380497 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-11 01:07:46.380504 | 
orchestrator | Thursday 11 September 2025 01:04:24 +0000 (0:00:03.250) 0:05:06.495 **** 2025-09-11 01:07:46.380510 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.380517 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.380523 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.380530 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.380536 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.380543 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.380549 | orchestrator | 2025-09-11 01:07:46.380556 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-11 01:07:46.380563 | orchestrator | Thursday 11 September 2025 01:04:24 +0000 (0:00:00.512) 0:05:07.007 **** 2025-09-11 01:07:46.380569 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-11 01:07:46.380576 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-11 01:07:46.380583 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-11 01:07:46.380594 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-11 01:07:46.380600 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-11 01:07:46.380607 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-11 01:07:46.380614 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-11 01:07:46.380620 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 
'service': 'nova-libvirt'})  2025-09-11 01:07:46.380630 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-11 01:07:46.380637 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.380644 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-11 01:07:46.380650 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-11 01:07:46.380657 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.380664 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-11 01:07:46.380670 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.380681 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-11 01:07:46.380688 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-11 01:07:46.380694 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-11 01:07:46.380701 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-11 01:07:46.380708 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-11 01:07:46.380714 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-11 01:07:46.380721 | orchestrator | 2025-09-11 01:07:46.380727 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-11 01:07:46.380734 | orchestrator | Thursday 11 
September 2025 01:04:30 +0000 (0:00:05.706) 0:05:12.713 **** 2025-09-11 01:07:46.380741 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-11 01:07:46.380747 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-11 01:07:46.380766 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-11 01:07:46.380773 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-11 01:07:46.380779 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-11 01:07:46.380786 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-11 01:07:46.380793 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-11 01:07:46.380799 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-11 01:07:46.380806 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-11 01:07:46.380812 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-11 01:07:46.380819 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-11 01:07:46.380830 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-11 01:07:46.380836 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-11 01:07:46.380843 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.380849 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-11 01:07:46.380856 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.380863 
| orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-11 01:07:46.380869 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.380876 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-11 01:07:46.380882 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-11 01:07:46.380889 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-11 01:07:46.380896 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-11 01:07:46.380902 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-11 01:07:46.380909 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-11 01:07:46.380915 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-11 01:07:46.380922 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-11 01:07:46.380928 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-11 01:07:46.380935 | orchestrator | 2025-09-11 01:07:46.380941 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-11 01:07:46.380948 | orchestrator | Thursday 11 September 2025 01:04:38 +0000 (0:00:08.068) 0:05:20.781 **** 2025-09-11 01:07:46.380955 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.380961 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.380968 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.380974 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.380981 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.380990 | orchestrator | skipping: [testbed-node-2] 2025-09-11 
01:07:46.380997 | orchestrator | 2025-09-11 01:07:46.381004 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-11 01:07:46.381011 | orchestrator | Thursday 11 September 2025 01:04:39 +0000 (0:00:00.669) 0:05:21.451 **** 2025-09-11 01:07:46.381017 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.381024 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.381030 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.381037 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.381043 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.381050 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.381056 | orchestrator | 2025-09-11 01:07:46.381063 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-11 01:07:46.381129 | orchestrator | Thursday 11 September 2025 01:04:39 +0000 (0:00:00.476) 0:05:21.927 **** 2025-09-11 01:07:46.381138 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.381145 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.381151 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.381158 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.381165 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.381171 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.381178 | orchestrator | 2025-09-11 01:07:46.381185 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-11 01:07:46.381191 | orchestrator | Thursday 11 September 2025 01:04:41 +0000 (0:00:01.998) 0:05:23.926 **** 2025-09-11 01:07:46.381199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.381211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.381219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.381226 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.381236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.381247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.381255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.381267 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.381274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-11 01:07:46.381281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-11 01:07:46.381289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.381295 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.381306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-11 01:07:46.381318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.381343 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.381350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-11 01:07:46.381357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.381364 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.381371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-11 01:07:46.381378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-11 01:07:46.381385 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.381392 | orchestrator | 2025-09-11 01:07:46.381399 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-11 01:07:46.381405 | orchestrator | Thursday 11 September 2025 01:04:43 +0000 (0:00:01.275) 0:05:25.201 **** 2025-09-11 01:07:46.381412 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-11 01:07:46.381419 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-11 01:07:46.381426 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.381432 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-11 01:07:46.381439 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-11 01:07:46.381446 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.381452 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-11 01:07:46.381459 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-compute-ironic)  2025-09-11 01:07:46.381466 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.381472 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-11 01:07:46.381479 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-11 01:07:46.381491 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.381502 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-11 01:07:46.381509 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-11 01:07:46.381516 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.381522 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-11 01:07:46.381529 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-11 01:07:46.381535 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.381542 | orchestrator | 2025-09-11 01:07:46.381549 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-11 01:07:46.381555 | orchestrator | Thursday 11 September 2025 01:04:43 +0000 (0:00:00.735) 0:05:25.936 **** 2025-09-11 01:07:46.381566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381588 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-11 01:07:46.381688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2025-09-11 01:07:46.381695 | orchestrator | 2025-09-11 01:07:46.381702 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-11 01:07:46.381708 | orchestrator | Thursday 11 September 2025 01:04:47 +0000 (0:00:03.265) 0:05:29.202 **** 2025-09-11 01:07:46.381715 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.381722 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.381729 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.381735 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.381742 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.381748 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.381769 | orchestrator | 2025-09-11 01:07:46.381776 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-11 01:07:46.381783 | orchestrator | Thursday 11 September 2025 01:04:47 +0000 (0:00:00.714) 0:05:29.916 **** 2025-09-11 01:07:46.381794 | orchestrator | 2025-09-11 01:07:46.381800 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-11 01:07:46.381807 | orchestrator | Thursday 11 September 2025 01:04:47 +0000 (0:00:00.129) 0:05:30.046 **** 2025-09-11 01:07:46.381814 | orchestrator | 2025-09-11 01:07:46.381820 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-11 01:07:46.381827 | orchestrator | Thursday 11 September 2025 01:04:48 +0000 (0:00:00.129) 0:05:30.176 **** 2025-09-11 01:07:46.381834 | orchestrator | 2025-09-11 01:07:46.381840 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-11 01:07:46.381847 | orchestrator | Thursday 11 September 2025 01:04:48 +0000 (0:00:00.132) 0:05:30.309 **** 2025-09-11 01:07:46.381853 | orchestrator | 2025-09-11 01:07:46.381860 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2025-09-11 01:07:46.381867 | orchestrator | Thursday 11 September 2025 01:04:48 +0000 (0:00:00.128) 0:05:30.437 **** 2025-09-11 01:07:46.381873 | orchestrator | 2025-09-11 01:07:46.381880 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-11 01:07:46.381886 | orchestrator | Thursday 11 September 2025 01:04:48 +0000 (0:00:00.182) 0:05:30.619 **** 2025-09-11 01:07:46.381893 | orchestrator | 2025-09-11 01:07:46.381899 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-11 01:07:46.381909 | orchestrator | Thursday 11 September 2025 01:04:48 +0000 (0:00:00.344) 0:05:30.964 **** 2025-09-11 01:07:46.381916 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:07:46.381923 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.381929 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:07:46.381936 | orchestrator | 2025-09-11 01:07:46.381942 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-11 01:07:46.381949 | orchestrator | Thursday 11 September 2025 01:05:01 +0000 (0:00:12.884) 0:05:43.848 **** 2025-09-11 01:07:46.381956 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.381962 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:07:46.381969 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:07:46.381976 | orchestrator | 2025-09-11 01:07:46.381982 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-11 01:07:46.381992 | orchestrator | Thursday 11 September 2025 01:05:18 +0000 (0:00:16.292) 0:06:00.141 **** 2025-09-11 01:07:46.381999 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.382005 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.382012 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.382041 | orchestrator | 2025-09-11 
01:07:46.382048 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-11 01:07:46.382054 | orchestrator | Thursday 11 September 2025 01:05:37 +0000 (0:00:19.717) 0:06:19.858 **** 2025-09-11 01:07:46.382061 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.382068 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.382074 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.382081 | orchestrator | 2025-09-11 01:07:46.382087 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-11 01:07:46.382094 | orchestrator | Thursday 11 September 2025 01:06:10 +0000 (0:00:32.531) 0:06:52.390 **** 2025-09-11 01:07:46.382101 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-09-11 01:07:46.382108 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-09-11 01:07:46.382114 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
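The "Checking libvirt container is ready" handler above fails and retries ("10 retries left") until libvirtd answers inside the freshly restarted `nova_libvirt` container. This is the standard Ansible `retries`/`until` loop pattern; a minimal sketch follows, where the probe command and delay are assumptions inferred from the log, not the exact kolla-ansible role source:

```yaml
# Hedged sketch of a readiness-wait task, mirroring the
# "FAILED - RETRYING ... (N retries left)" lines above.
- name: Checking libvirt container is ready
  # Assumed probe: the same check the container's healthcheck uses.
  command: docker exec nova_libvirt virsh version --daemon
  register: libvirt_ready
  until: libvirt_ready.rc == 0
  retries: 10   # matches the retry counter printed in the log
  delay: 5      # assumed pause between attempts
```

With `until`/`retries`, Ansible reports each failed attempt as "FAILED - RETRYING" and only marks the task failed once the retry budget is exhausted, which is why the nodes above recover to `changed` after a single retry.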
2025-09-11 01:07:46.382121 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.382127 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.382134 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.382141 | orchestrator | 2025-09-11 01:07:46.382147 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-11 01:07:46.382158 | orchestrator | Thursday 11 September 2025 01:06:16 +0000 (0:00:06.460) 0:06:58.851 **** 2025-09-11 01:07:46.382165 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.382172 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.382179 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.382185 | orchestrator | 2025-09-11 01:07:46.382192 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-11 01:07:46.382198 | orchestrator | Thursday 11 September 2025 01:06:17 +0000 (0:00:00.926) 0:06:59.777 **** 2025-09-11 01:07:46.382205 | orchestrator | changed: [testbed-node-4] 2025-09-11 01:07:46.382212 | orchestrator | changed: [testbed-node-3] 2025-09-11 01:07:46.382218 | orchestrator | changed: [testbed-node-5] 2025-09-11 01:07:46.382225 | orchestrator | 2025-09-11 01:07:46.382231 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-11 01:07:46.382238 | orchestrator | Thursday 11 September 2025 01:06:37 +0000 (0:00:19.664) 0:07:19.442 **** 2025-09-11 01:07:46.382245 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.382251 | orchestrator | 2025-09-11 01:07:46.382258 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-11 01:07:46.382265 | orchestrator | Thursday 11 September 2025 01:06:37 +0000 (0:00:00.128) 0:07:19.570 **** 2025-09-11 01:07:46.382271 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.382278 | orchestrator | skipping: [testbed-node-0] 
2025-09-11 01:07:46.382285 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.382291 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.382298 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.382305 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-09-11 01:07:46.382311 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-11 01:07:46.382318 | orchestrator | 2025-09-11 01:07:46.382324 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-11 01:07:46.382331 | orchestrator | Thursday 11 September 2025 01:06:59 +0000 (0:00:22.329) 0:07:41.900 **** 2025-09-11 01:07:46.382338 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.382344 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.382351 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.382357 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.382364 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.382371 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.382377 | orchestrator | 2025-09-11 01:07:46.382384 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-11 01:07:46.382390 | orchestrator | Thursday 11 September 2025 01:07:07 +0000 (0:00:07.996) 0:07:49.897 **** 2025-09-11 01:07:46.382397 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.382404 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.382410 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.382417 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.382424 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.382430 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-09-11 01:07:46.382437 | 
orchestrator | 2025-09-11 01:07:46.382443 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-11 01:07:46.382450 | orchestrator | Thursday 11 September 2025 01:07:11 +0000 (0:00:03.635) 0:07:53.532 **** 2025-09-11 01:07:46.382457 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-11 01:07:46.382463 | orchestrator | 2025-09-11 01:07:46.382470 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-11 01:07:46.382480 | orchestrator | Thursday 11 September 2025 01:07:24 +0000 (0:00:12.663) 0:08:06.196 **** 2025-09-11 01:07:46.382486 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-11 01:07:46.382493 | orchestrator | 2025-09-11 01:07:46.382500 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-11 01:07:46.382513 | orchestrator | Thursday 11 September 2025 01:07:25 +0000 (0:00:01.056) 0:08:07.252 **** 2025-09-11 01:07:46.382519 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.382526 | orchestrator | 2025-09-11 01:07:46.382533 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-11 01:07:46.382539 | orchestrator | Thursday 11 September 2025 01:07:26 +0000 (0:00:01.220) 0:08:08.473 **** 2025-09-11 01:07:46.382546 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-11 01:07:46.382552 | orchestrator | 2025-09-11 01:07:46.382562 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-11 01:07:46.382569 | orchestrator | Thursday 11 September 2025 01:07:38 +0000 (0:00:11.783) 0:08:20.256 **** 2025-09-11 01:07:46.382576 | orchestrator | ok: [testbed-node-3] 2025-09-11 01:07:46.382582 | orchestrator | ok: [testbed-node-4] 2025-09-11 01:07:46.382589 | orchestrator | ok: [testbed-node-5] 2025-09-11 01:07:46.382596 | 
orchestrator | ok: [testbed-node-0] 2025-09-11 01:07:46.382602 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:07:46.382609 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:07:46.382615 | orchestrator | 2025-09-11 01:07:46.382622 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-11 01:07:46.382628 | orchestrator | 2025-09-11 01:07:46.382635 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-11 01:07:46.382642 | orchestrator | Thursday 11 September 2025 01:07:39 +0000 (0:00:01.748) 0:08:22.005 **** 2025-09-11 01:07:46.382648 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:07:46.382655 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:07:46.382661 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:07:46.382668 | orchestrator | 2025-09-11 01:07:46.382675 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-11 01:07:46.382681 | orchestrator | 2025-09-11 01:07:46.382688 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-11 01:07:46.382694 | orchestrator | Thursday 11 September 2025 01:07:41 +0000 (0:00:01.139) 0:08:23.145 **** 2025-09-11 01:07:46.382701 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.382708 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.382714 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.382721 | orchestrator | 2025-09-11 01:07:46.382727 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-11 01:07:46.382734 | orchestrator | 2025-09-11 01:07:46.382741 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-11 01:07:46.382747 | orchestrator | Thursday 11 September 2025 01:07:41 +0000 (0:00:00.491) 0:08:23.636 **** 2025-09-11 01:07:46.382766 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-11 01:07:46.382772 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-11 01:07:46.382779 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-11 01:07:46.382786 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-11 01:07:46.382792 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-11 01:07:46.382799 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-11 01:07:46.382806 | orchestrator | skipping: [testbed-node-3] 2025-09-11 01:07:46.382812 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-11 01:07:46.382819 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-11 01:07:46.382826 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-11 01:07:46.382832 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-11 01:07:46.382839 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-11 01:07:46.382846 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-11 01:07:46.382852 | orchestrator | skipping: [testbed-node-4] 2025-09-11 01:07:46.382859 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-11 01:07:46.382870 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-11 01:07:46.382877 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-11 01:07:46.382884 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-11 01:07:46.382890 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-11 01:07:46.382897 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-11 01:07:46.382904 | orchestrator | skipping: [testbed-node-5] 2025-09-11 01:07:46.382910 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-11 01:07:46.382917 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-11 01:07:46.382923 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-11 01:07:46.382930 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-11 01:07:46.382937 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-11 01:07:46.382943 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-11 01:07:46.382950 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.382957 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-11 01:07:46.382963 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-11 01:07:46.382970 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-11 01:07:46.382976 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-11 01:07:46.382983 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-11 01:07:46.382990 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-11 01:07:46.382996 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.383003 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-11 01:07:46.383013 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-11 01:07:46.383020 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-11 01:07:46.383027 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-11 01:07:46.383033 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-11 01:07:46.383040 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-11 01:07:46.383046 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.383053 | orchestrator | 
2025-09-11 01:07:46.383060 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-11 01:07:46.383066 | orchestrator | 2025-09-11 01:07:46.383073 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-11 01:07:46.383079 | orchestrator | Thursday 11 September 2025 01:07:42 +0000 (0:00:01.140) 0:08:24.777 **** 2025-09-11 01:07:46.383089 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-11 01:07:46.383097 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-11 01:07:46.383103 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.383110 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-11 01:07:46.383117 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-11 01:07:46.383123 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.383130 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-11 01:07:46.383136 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-09-11 01:07:46.383143 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.383150 | orchestrator | 2025-09-11 01:07:46.383156 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-11 01:07:46.383163 | orchestrator | 2025-09-11 01:07:46.383170 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-11 01:07:46.383176 | orchestrator | Thursday 11 September 2025 01:07:43 +0000 (0:00:00.656) 0:08:25.434 **** 2025-09-11 01:07:46.383183 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.383189 | orchestrator | 2025-09-11 01:07:46.383200 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-11 01:07:46.383207 | orchestrator | 2025-09-11 01:07:46.383213 | orchestrator | TASK [nova-cell : Run Nova 
cell online database migrations] ******************** 2025-09-11 01:07:46.383220 | orchestrator | Thursday 11 September 2025 01:07:43 +0000 (0:00:00.635) 0:08:26.069 **** 2025-09-11 01:07:46.383227 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:07:46.383233 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:07:46.383240 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:07:46.383246 | orchestrator | 2025-09-11 01:07:46.383253 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:07:46.383260 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-11 01:07:46.383267 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-11 01:07:46.383274 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-11 01:07:46.383281 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-11 01:07:46.383287 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-11 01:07:46.383294 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-09-11 01:07:46.383300 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-11 01:07:46.383307 | orchestrator | 2025-09-11 01:07:46.383314 | orchestrator | 2025-09-11 01:07:46.383320 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:07:46.383327 | orchestrator | Thursday 11 September 2025 01:07:44 +0000 (0:00:00.389) 0:08:26.459 **** 2025-09-11 01:07:46.383334 | orchestrator | =============================================================================== 2025-09-11 01:07:46.383340 | orchestrator | 
nova-cell : Restart nova-libvirt container ----------------------------- 32.53s 2025-09-11 01:07:46.383347 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.69s 2025-09-11 01:07:46.383354 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.33s 2025-09-11 01:07:46.383360 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.72s 2025-09-11 01:07:46.383367 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 19.66s 2025-09-11 01:07:46.383373 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.31s 2025-09-11 01:07:46.383380 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.18s 2025-09-11 01:07:46.383387 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.78s 2025-09-11 01:07:46.383393 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.29s 2025-09-11 01:07:46.383400 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.95s 2025-09-11 01:07:46.383406 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.61s 2025-09-11 01:07:46.383416 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.88s 2025-09-11 01:07:46.383423 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.72s 2025-09-11 01:07:46.383430 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.66s 2025-09-11 01:07:46.383436 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.19s 2025-09-11 01:07:46.383443 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.78s 2025-09-11 01:07:46.383456 | orchestrator | 
service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.92s 2025-09-11 01:07:46.383463 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.39s 2025-09-11 01:07:46.383469 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.12s 2025-09-11 01:07:46.383479 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.07s 2025-09-11 01:07:46.383486 | orchestrator | 2025-09-11 01:07:46 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:49.408092 | orchestrator | 2025-09-11 01:07:49 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:49.409919 | orchestrator | 2025-09-11 01:07:49 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:49.411728 | orchestrator | 2025-09-11 01:07:49 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:52.454440 | orchestrator | 2025-09-11 01:07:52 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:52.457135 | orchestrator | 2025-09-11 01:07:52 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:52.457230 | orchestrator | 2025-09-11 01:07:52 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:55.496508 | orchestrator | 2025-09-11 01:07:55 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:55.497872 | orchestrator | 2025-09-11 01:07:55 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:07:55.497907 | orchestrator | 2025-09-11 01:07:55 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:07:58.545088 | orchestrator | 2025-09-11 01:07:58 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:07:58.546951 | orchestrator | 2025-09-11 01:07:58 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 
until the next check 2025-09-11 01:08:32.038990 | orchestrator | 2025-09-11
01:08:32 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:08:32.039261 | orchestrator | 2025-09-11 01:08:32 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:08:32.039283 | orchestrator | 2025-09-11 01:08:32 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:08:35.081781 | orchestrator | 2025-09-11 01:08:35 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:08:35.082570 | orchestrator | 2025-09-11 01:08:35 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:08:35.082618 | orchestrator | 2025-09-11 01:08:35 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:08:38.120535 | orchestrator | 2025-09-11 01:08:38 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:08:38.121234 | orchestrator | 2025-09-11 01:08:38 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:08:38.121500 | orchestrator | 2025-09-11 01:08:38 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:08:41.166224 | orchestrator | 2025-09-11 01:08:41 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state STARTED 2025-09-11 01:08:41.167701 | orchestrator | 2025-09-11 01:08:41 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:08:41.167732 | orchestrator | 2025-09-11 01:08:41 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:08:44.216898 | orchestrator | 2025-09-11 01:08:44 | INFO  | Task b0c8ad1c-1eaf-4252-b514-faef9e5dc7a8 is in state SUCCESS 2025-09-11 01:08:44.218093 | orchestrator | 2025-09-11 01:08:44.218132 | orchestrator | 2025-09-11 01:08:44.218145 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 01:08:44.218158 | orchestrator | 2025-09-11 01:08:44.218209 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-09-11 01:08:44.218225 | orchestrator | Thursday 11 September 2025 01:06:28 +0000 (0:00:00.193) 0:00:00.193 **** 2025-09-11 01:08:44.218236 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:08:44.218248 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:08:44.218259 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:08:44.218269 | orchestrator | 2025-09-11 01:08:44.218281 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 01:08:44.218292 | orchestrator | Thursday 11 September 2025 01:06:28 +0000 (0:00:00.227) 0:00:00.420 **** 2025-09-11 01:08:44.218303 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-11 01:08:44.218314 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-11 01:08:44.218325 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-11 01:08:44.218336 | orchestrator | 2025-09-11 01:08:44.218346 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-11 01:08:44.218357 | orchestrator | 2025-09-11 01:08:44.218367 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-11 01:08:44.218436 | orchestrator | Thursday 11 September 2025 01:06:29 +0000 (0:00:00.272) 0:00:00.693 **** 2025-09-11 01:08:44.218447 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:08:44.218459 | orchestrator | 2025-09-11 01:08:44.218470 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-11 01:08:44.218481 | orchestrator | Thursday 11 September 2025 01:06:29 +0000 (0:00:00.388) 0:00:01.081 **** 2025-09-11 01:08:44.218512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.218529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.218541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.218574 | orchestrator | 2025-09-11 01:08:44.218585 | orchestrator | TASK [grafana : Check if extra 
configuration file exists] ********************** 2025-09-11 01:08:44.218597 | orchestrator | Thursday 11 September 2025 01:06:30 +0000 (0:00:00.772) 0:00:01.854 **** 2025-09-11 01:08:44.218763 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-11 01:08:44.218777 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-11 01:08:44.218792 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-11 01:08:44.218805 | orchestrator | 2025-09-11 01:08:44.218817 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-11 01:08:44.218831 | orchestrator | Thursday 11 September 2025 01:06:31 +0000 (0:00:00.700) 0:00:02.555 **** 2025-09-11 01:08:44.218843 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:08:44.218856 | orchestrator | 2025-09-11 01:08:44.218868 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-11 01:08:44.218882 | orchestrator | Thursday 11 September 2025 01:06:31 +0000 (0:00:00.550) 0:00:03.105 **** 2025-09-11 01:08:44.218910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.218925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.218946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.218959 | orchestrator | 2025-09-11 01:08:44.218972 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-11 01:08:44.218985 | orchestrator | Thursday 11 September 2025 01:06:32 +0000 (0:00:01.268) 0:00:04.373 **** 2025-09-11 01:08:44.218998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-11 01:08:44.219018 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:08:44.219030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-11 01:08:44.219041 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:08:44.219059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-11 01:08:44.219071 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:08:44.219081 | orchestrator | 2025-09-11 01:08:44.219092 | orchestrator | 
TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-11 01:08:44.219103 | orchestrator | Thursday 11 September 2025 01:06:33 +0000 (0:00:00.363) 0:00:04.736 **** 2025-09-11 01:08:44.219114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-11 01:08:44.219125 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:08:44.219142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-11 01:08:44.219153 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:08:44.219164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-11 01:08:44.219184 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:08:44.219195 | orchestrator | 2025-09-11 01:08:44.219205 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-11 01:08:44.219216 | orchestrator | Thursday 11 September 2025 01:06:34 +0000 (0:00:00.884) 0:00:05.621 **** 2025-09-11 01:08:44.219227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.219239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.219257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.219268 | orchestrator | 2025-09-11 01:08:44.219279 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-11 01:08:44.219290 | orchestrator | Thursday 11 September 2025 01:06:35 +0000 (0:00:01.198) 0:00:06.819 **** 2025-09-11 01:08:44.219306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.219318 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.219349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.219362 | orchestrator | 2025-09-11 01:08:44.219373 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-11 01:08:44.219384 | orchestrator | Thursday 11 September 2025 01:06:36 +0000 (0:00:01.342) 0:00:08.162 **** 2025-09-11 01:08:44.219394 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:08:44.219405 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:08:44.219416 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:08:44.219426 | orchestrator | 2025-09-11 01:08:44.219437 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-11 
01:08:44.219448 | orchestrator | Thursday 11 September 2025 01:06:37 +0000 (0:00:00.532) 0:00:08.695 **** 2025-09-11 01:08:44.219459 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-11 01:08:44.219470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-11 01:08:44.219480 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-11 01:08:44.219491 | orchestrator | 2025-09-11 01:08:44.219502 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-11 01:08:44.219512 | orchestrator | Thursday 11 September 2025 01:06:38 +0000 (0:00:01.386) 0:00:10.082 **** 2025-09-11 01:08:44.219523 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-11 01:08:44.219534 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-11 01:08:44.219545 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-11 01:08:44.219556 | orchestrator | 2025-09-11 01:08:44.219567 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-11 01:08:44.219577 | orchestrator | Thursday 11 September 2025 01:06:40 +0000 (0:00:01.930) 0:00:12.012 **** 2025-09-11 01:08:44.219594 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-11 01:08:44.219628 | orchestrator | 2025-09-11 01:08:44.219639 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-11 01:08:44.219650 | orchestrator | Thursday 11 September 2025 01:06:41 +0000 (0:00:01.051) 0:00:13.064 **** 2025-09-11 01:08:44.219661 | orchestrator | [WARNING]: Skipped 
'/etc/kolla/grafana/dashboards' path due to this access 2025-09-11 01:08:44.219672 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-11 01:08:44.219682 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:08:44.219693 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:08:44.219704 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:08:44.219715 | orchestrator | 2025-09-11 01:08:44.219726 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-11 01:08:44.219736 | orchestrator | Thursday 11 September 2025 01:06:42 +0000 (0:00:00.824) 0:00:13.888 **** 2025-09-11 01:08:44.219747 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:08:44.219758 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:08:44.219769 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:08:44.219786 | orchestrator | 2025-09-11 01:08:44.219797 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-11 01:08:44.219808 | orchestrator | Thursday 11 September 2025 01:06:42 +0000 (0:00:00.390) 0:00:14.279 **** 2025-09-11 01:08:44.219825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1100214, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0091476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1100214, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0091476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1100214, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0091476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100283, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0227017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219879 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100283, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0227017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100283, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0227017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100239, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0126781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219924 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100239, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0126781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100239, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0126781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100285, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0240982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219958 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100285, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0240982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100285, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0240982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.219994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1100257, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.015573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-11 01:08:44.220010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1100257, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.015573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1100257, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.015573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100280, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0206783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100280, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0206783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100280, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0206783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1100212, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0042024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1100212, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0042024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1100212, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0042024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1100229, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0094624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1100229, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0094624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1100229, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0094624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100241, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0126781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100241, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0126781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100241, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0126781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100264, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0169866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100264, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0169866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100264, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0169866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100282, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0221922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100282, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0221922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100282, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0221922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1100234, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.010678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1100234, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.010678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.220998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1100234, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.010678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.221014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100279, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0206783, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.221032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100279, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0206783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.221049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100279, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0206783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.221061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1100259, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0158465, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-11 01:08:44 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] => Grafana dashboards under /operations/grafana/dashboards/ (all regular files, mode 0644, owner root:root, dev 115, nlink 1, atime/mtime 1757548928.0):
2025-09-11 01:08:44 | orchestrator |   ceph/osds-overview.json                      size=38432   inode=1100259  ctime=1757549852.0158465
2025-09-11 01:08:44 | orchestrator |   ceph/multi-cluster-overview.json             size=62676   inode=1100249  ctime=1757549852.0151489
2025-09-11 01:08:44 | orchestrator |   ceph/hosts-overview.json                     size=27218   inode=1100247  ctime=1757549852.0145042
2025-09-11 01:08:44 | orchestrator |   ceph/pool-overview.json                      size=49139   inode=1100267  ctime=1757549852.020593
2025-09-11 01:08:44 | orchestrator |   ceph/host-details.json                       size=44791   inode=1100242  ctime=1757549852.013678
2025-09-11 01:08:44 | orchestrator |   ceph/radosgw-sync-overview.json              size=16156   inode=1100281  ctime=1757549852.0216782
2025-09-11 01:08:44 | orchestrator |   openstack/openstack.json                     size=57270   inode=1101076  ctime=1757549852.3086827
2025-09-11 01:08:44 | orchestrator |   infrastructure/haproxy.json                  size=410814  inode=1100337  ctime=1757549852.0369382
2025-09-11 01:08:44 | orchestrator |   infrastructure/database.json                 size=30898   inode=1100302  ctime=1757549852.0275836
2025-09-11 01:08:44 | orchestrator |   infrastructure/node-rsrc-use.json            size=15725   inode=1100365  ctime=1757549852.0391896
2025-09-11 01:08:44 | orchestrator |   infrastructure/alertmanager-overview.json    size=9645    inode=1100294  ctime=1757549852.0246782
2025-09-11 01:08:44 | orchestrator |   infrastructure/opensearch.json               size=65458   inode=1100958  ctime=1757549852.2470558
2025-09-11 01:08:44 | orchestrator |   infrastructure/node_exporter_full.json       size=682774  inode=1100369  ctime=1757549852.2406816
2025-09-11 01:08:44 | orchestrator |   infrastructure/prometheus-remote-write.json  size=22317   inode=1100962  ctime=1757549852.2476785
2025-09-11 01:08:44 | orchestrator |   infrastructure/redfish.json                  size=38087   inode=1101058  ctime=1757549852.3018343
2025-09-11 01:08:44 | orchestrator |   infrastructure/nodes.json                    size=21109   inode=1100953  ctime=1757549852.2453413
2025-09-11 01:08:44 | orchestrator |   infrastructure/memcached.json                size=24243   inode=1100359  ctime=1757549852.0384839
2025-09-11 01:08:44 | orchestrator |   infrastructure/fluentd.json                  size=82960   inode=1100317  ctime=1757549852.0306783
2025-09-11 01:08:44 | orchestrator |   infrastructure/libvirt.json                  size=29672   inode=1100353  ctime=1757549852.0376194
2025-09-11 01:08:44.222240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100305, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0290353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100305, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0290353, 'gr_name': 'root', 'pw_name': 'root',
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100305, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0290353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100364, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0384839, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100364, 'dev': 115, 'nlink': 1, 'atime': 
1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0384839, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100364, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0384839, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1101010, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.2986825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 222049, 'inode': 1101010, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.2986825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1101010, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.2986825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1100971, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.2696822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1100971, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.2696822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1100971, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.2696822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100297, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0256784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100297, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0256784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100297, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0256784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100299, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0266783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222450 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100299, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0266783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100299, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.0266783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100949, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.2438536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-11 01:08:44.222494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100949, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.2438536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100949, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.2438536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1100968, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.247917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1100968, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.247917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1100968, 'dev': 115, 'nlink': 1, 'atime': 1757548928.0, 'mtime': 1757548928.0, 'ctime': 1757549852.247917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-11 01:08:44.222556 | orchestrator | 2025-09-11 01:08:44.222571 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-11 01:08:44.222582 | orchestrator | Thursday 11 September 2025 01:07:20 +0000 (0:00:37.302) 0:00:51.582 **** 2025-09-11 01:08:44.222593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.222625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.222635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-11 01:08:44.222645 | orchestrator | 2025-09-11 01:08:44.222655 | 
orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-11 01:08:44.222664 | orchestrator | Thursday 11 September 2025 01:07:21 +0000 (0:00:00.994) 0:00:52.576 **** 2025-09-11 01:08:44.222674 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:08:44.222684 | orchestrator | 2025-09-11 01:08:44.222694 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-11 01:08:44.222703 | orchestrator | Thursday 11 September 2025 01:07:23 +0000 (0:00:02.402) 0:00:54.979 **** 2025-09-11 01:08:44.222713 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:08:44.222722 | orchestrator | 2025-09-11 01:08:44.222731 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-11 01:08:44.222741 | orchestrator | Thursday 11 September 2025 01:07:26 +0000 (0:00:02.541) 0:00:57.521 **** 2025-09-11 01:08:44.222750 | orchestrator | 2025-09-11 01:08:44.222760 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-11 01:08:44.222775 | orchestrator | Thursday 11 September 2025 01:07:26 +0000 (0:00:00.069) 0:00:57.590 **** 2025-09-11 01:08:44.222785 | orchestrator | 2025-09-11 01:08:44.222794 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-11 01:08:44.222804 | orchestrator | Thursday 11 September 2025 01:07:26 +0000 (0:00:00.064) 0:00:57.655 **** 2025-09-11 01:08:44.222813 | orchestrator | 2025-09-11 01:08:44.222823 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-11 01:08:44.222833 | orchestrator | Thursday 11 September 2025 01:07:26 +0000 (0:00:00.206) 0:00:57.861 **** 2025-09-11 01:08:44.222842 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:08:44.222852 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:08:44.222868 | orchestrator | changed: [testbed-node-0] 
2025-09-11 01:08:44.222877 | orchestrator | 2025-09-11 01:08:44.222887 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-11 01:08:44.222896 | orchestrator | Thursday 11 September 2025 01:07:28 +0000 (0:00:01.934) 0:00:59.795 **** 2025-09-11 01:08:44.222906 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:08:44.222916 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:08:44.222925 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-11 01:08:44.222936 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-11 01:08:44.222946 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-11 01:08:44.222955 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:08:44.222965 | orchestrator | 2025-09-11 01:08:44.222974 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-11 01:08:44.222984 | orchestrator | Thursday 11 September 2025 01:08:07 +0000 (0:00:38.936) 0:01:38.731 **** 2025-09-11 01:08:44.222998 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:08:44.223008 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:08:44.223018 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:08:44.223027 | orchestrator | 2025-09-11 01:08:44.223037 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-11 01:08:44.223047 | orchestrator | Thursday 11 September 2025 01:08:35 +0000 (0:00:28.481) 0:02:07.213 **** 2025-09-11 01:08:44.223056 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:08:44.223066 | orchestrator | 2025-09-11 01:08:44.223076 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-11 01:08:44.223085 | orchestrator | Thursday 11 
September 2025 01:08:38 +0000 (0:00:02.350) 0:02:09.563 **** 2025-09-11 01:08:44.223095 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:08:44.223104 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:08:44.223114 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:08:44.223123 | orchestrator | 2025-09-11 01:08:44.223133 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-11 01:08:44.223142 | orchestrator | Thursday 11 September 2025 01:08:38 +0000 (0:00:00.455) 0:02:10.019 **** 2025-09-11 01:08:44.223153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-11 01:08:44.223165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-11 01:08:44.223175 | orchestrator | 2025-09-11 01:08:44.223185 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-11 01:08:44.223195 | orchestrator | Thursday 11 September 2025 01:08:41 +0000 (0:00:02.544) 0:02:12.563 **** 2025-09-11 01:08:44.223204 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:08:44.223214 | orchestrator | 2025-09-11 01:08:44.223223 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:08:44.223233 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-11 01:08:44.223243 | orchestrator | testbed-node-1 
: ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-11 01:08:44.223253 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-11 01:08:44.223268 | orchestrator | 2025-09-11 01:08:44.223278 | orchestrator | 2025-09-11 01:08:44.223287 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:08:44.223297 | orchestrator | Thursday 11 September 2025 01:08:41 +0000 (0:00:00.252) 0:02:12.815 **** 2025-09-11 01:08:44.223306 | orchestrator | =============================================================================== 2025-09-11 01:08:44.223316 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.94s 2025-09-11 01:08:44.223325 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.30s 2025-09-11 01:08:44.223335 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.48s 2025-09-11 01:08:44.223344 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.54s 2025-09-11 01:08:44.223354 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.54s 2025-09-11 01:08:44.223368 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.40s 2025-09-11 01:08:44.223378 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.35s 2025-09-11 01:08:44.223388 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.93s 2025-09-11 01:08:44.223397 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.93s 2025-09-11 01:08:44.223407 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.39s 2025-09-11 01:08:44.223417 | orchestrator | grafana : Copying over grafana.ini 
-------------------------------------- 1.34s 2025-09-11 01:08:44.223426 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.27s 2025-09-11 01:08:44.223435 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.20s 2025-09-11 01:08:44.223445 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.05s 2025-09-11 01:08:44.223454 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.99s 2025-09-11 01:08:44.223464 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.88s 2025-09-11 01:08:44.223473 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.82s 2025-09-11 01:08:44.223483 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.77s 2025-09-11 01:08:44.223492 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.70s 2025-09-11 01:08:44.223502 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.55s 2025-09-11 01:08:44.223515 | orchestrator | 2025-09-11 01:08:44 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:08:44.223525 | orchestrator | 2025-09-11 01:08:44 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:08:47.258824 | orchestrator | 2025-09-11 01:08:47 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:08:47.258921 | orchestrator | 2025-09-11 01:08:47 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:08:50.301523 | orchestrator | 2025-09-11 01:08:50 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:08:50.301638 | orchestrator | 2025-09-11 01:08:50 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:08:53.345003 | orchestrator | 2025-09-11 01:08:53 | INFO  | Task 
56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:11:43.751784 | orchestrator | 2025-09-11 01:11:43 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:11:46.796516 | orchestrator | 2025-09-11 01:11:46 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:11:46.796618 | orchestrator | 2025-09-11 01:11:46 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:11:49.840131 | orchestrator | 2025-09-11 01:11:49 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:11:49.840241 | orchestrator | 2025-09-11 01:11:49 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:11:52.887977 | orchestrator | 2025-09-11 01:11:52 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:11:52.888076 | orchestrator | 2025-09-11 01:11:52 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:11:55.926500 | orchestrator | 2025-09-11 01:11:55 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state STARTED 2025-09-11 01:11:55.926601 | orchestrator | 2025-09-11 01:11:55 | INFO  | Wait 1 second(s) until the next check 2025-09-11 01:11:58.972719 | orchestrator | 2025-09-11 01:11:58 | INFO  | Task 56fd0a17-5ca9-4395-b10b-cb73f7b27897 is in state SUCCESS 2025-09-11 01:11:58.974534 | orchestrator | 2025-09-11 01:11:58.974578 | orchestrator | 2025-09-11 01:11:58.974592 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-11 01:11:58.974630 | orchestrator | 2025-09-11 01:11:58.974642 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-11 01:11:58.974654 | orchestrator | Thursday 11 September 2025 01:07:22 +0000 (0:00:00.249) 0:00:00.249 **** 2025-09-11 01:11:58.974665 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:11:58.974677 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:11:58.974688 | orchestrator | ok: [testbed-node-2] 
2025-09-11 01:11:58.974699 | orchestrator | 2025-09-11 01:11:58.974710 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-11 01:11:58.974842 | orchestrator | Thursday 11 September 2025 01:07:22 +0000 (0:00:00.276) 0:00:00.526 **** 2025-09-11 01:11:58.975290 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-11 01:11:58.975543 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-11 01:11:58.975557 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-11 01:11:58.975587 | orchestrator | 2025-09-11 01:11:58.975598 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-11 01:11:58.975609 | orchestrator | 2025-09-11 01:11:58.975620 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-11 01:11:58.975630 | orchestrator | Thursday 11 September 2025 01:07:23 +0000 (0:00:00.363) 0:00:00.890 **** 2025-09-11 01:11:58.975641 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:11:58.975653 | orchestrator | 2025-09-11 01:11:58.975664 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-11 01:11:58.975675 | orchestrator | Thursday 11 September 2025 01:07:23 +0000 (0:00:00.506) 0:00:01.396 **** 2025-09-11 01:11:58.975686 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-11 01:11:58.975697 | orchestrator | 2025-09-11 01:11:58.975708 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-11 01:11:58.975718 | orchestrator | Thursday 11 September 2025 01:07:27 +0000 (0:00:03.743) 0:00:05.139 **** 2025-09-11 01:11:58.975729 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-11 
01:11:58.975740 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-11 01:11:58.975750 | orchestrator | 2025-09-11 01:11:58.975761 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-11 01:11:58.975772 | orchestrator | Thursday 11 September 2025 01:07:34 +0000 (0:00:06.833) 0:00:11.973 **** 2025-09-11 01:11:58.975782 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-11 01:11:58.975793 | orchestrator | 2025-09-11 01:11:58.975804 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-11 01:11:58.975815 | orchestrator | Thursday 11 September 2025 01:07:37 +0000 (0:00:03.545) 0:00:15.519 **** 2025-09-11 01:11:58.975825 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-11 01:11:58.975836 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-11 01:11:58.975847 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-11 01:11:58.975857 | orchestrator | 2025-09-11 01:11:58.975868 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-11 01:11:58.975878 | orchestrator | Thursday 11 September 2025 01:07:46 +0000 (0:00:08.377) 0:00:23.896 **** 2025-09-11 01:11:58.975889 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-11 01:11:58.975900 | orchestrator | 2025-09-11 01:11:58.975910 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-11 01:11:58.975936 | orchestrator | Thursday 11 September 2025 01:07:49 +0000 (0:00:03.584) 0:00:27.481 **** 2025-09-11 01:11:58.975947 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-11 01:11:58.975958 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-11 01:11:58.975968 | orchestrator | 2025-09-11 
01:11:58.975979 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-11 01:11:58.976002 | orchestrator | Thursday 11 September 2025 01:07:57 +0000 (0:00:07.742) 0:00:35.223 **** 2025-09-11 01:11:58.976012 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-11 01:11:58.976023 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-11 01:11:58.976034 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-11 01:11:58.976045 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-11 01:11:58.976055 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-11 01:11:58.976066 | orchestrator | 2025-09-11 01:11:58.976076 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-11 01:11:58.976087 | orchestrator | Thursday 11 September 2025 01:08:13 +0000 (0:00:16.161) 0:00:51.385 **** 2025-09-11 01:11:58.976098 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:11:58.976108 | orchestrator | 2025-09-11 01:11:58.976119 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-11 01:11:58.976130 | orchestrator | Thursday 11 September 2025 01:08:14 +0000 (0:00:00.518) 0:00:51.904 **** 2025-09-11 01:11:58.976140 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.976151 | orchestrator | 2025-09-11 01:11:58.976161 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-11 01:11:58.976172 | orchestrator | Thursday 11 September 2025 01:08:18 +0000 (0:00:04.666) 0:00:56.571 **** 2025-09-11 01:11:58.976182 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.976193 | orchestrator | 2025-09-11 01:11:58.976204 | orchestrator | TASK [octavia : 
Get service project id] **************************************** 2025-09-11 01:11:58.976256 | orchestrator | Thursday 11 September 2025 01:08:23 +0000 (0:00:04.502) 0:01:01.073 **** 2025-09-11 01:11:58.976292 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:11:58.976302 | orchestrator | 2025-09-11 01:11:58.976313 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-11 01:11:58.976324 | orchestrator | Thursday 11 September 2025 01:08:26 +0000 (0:00:03.215) 0:01:04.289 **** 2025-09-11 01:11:58.976334 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-11 01:11:58.976345 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-11 01:11:58.976356 | orchestrator | 2025-09-11 01:11:58.976366 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-09-11 01:11:58.976377 | orchestrator | Thursday 11 September 2025 01:08:36 +0000 (0:00:09.821) 0:01:14.111 **** 2025-09-11 01:11:58.976387 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-09-11 01:11:58.976398 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-09-11 01:11:58.976411 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-09-11 01:11:58.976423 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-09-11 01:11:58.976434 | orchestrator | 2025-09-11 01:11:58.976445 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-09-11 01:11:58.976455 | orchestrator | Thursday 11 September 2025 01:08:53 +0000 (0:00:17.063) 
0:01:31.174 **** 2025-09-11 01:11:58.976466 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.976477 | orchestrator | 2025-09-11 01:11:58.976487 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-09-11 01:11:58.976498 | orchestrator | Thursday 11 September 2025 01:08:58 +0000 (0:00:04.742) 0:01:35.917 **** 2025-09-11 01:11:58.976508 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.976519 | orchestrator | 2025-09-11 01:11:58.976529 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-11 01:11:58.976566 | orchestrator | Thursday 11 September 2025 01:09:04 +0000 (0:00:06.841) 0:01:42.759 **** 2025-09-11 01:11:58.976577 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:11:58.976587 | orchestrator | 2025-09-11 01:11:58.976598 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-11 01:11:58.976609 | orchestrator | Thursday 11 September 2025 01:09:05 +0000 (0:00:00.228) 0:01:42.987 **** 2025-09-11 01:11:58.976619 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.976630 | orchestrator | 2025-09-11 01:11:58.976640 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-11 01:11:58.976651 | orchestrator | Thursday 11 September 2025 01:09:09 +0000 (0:00:04.800) 0:01:47.788 **** 2025-09-11 01:11:58.976662 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:11:58.976672 | orchestrator | 2025-09-11 01:11:58.976683 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-11 01:11:58.976694 | orchestrator | Thursday 11 September 2025 01:09:10 +0000 (0:00:00.977) 0:01:48.765 **** 2025-09-11 01:11:58.976704 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.976715 | orchestrator | 
changed: [testbed-node-0] 2025-09-11 01:11:58.976726 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.976736 | orchestrator | 2025-09-11 01:11:58.976747 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-09-11 01:11:58.976764 | orchestrator | Thursday 11 September 2025 01:09:16 +0000 (0:00:05.810) 0:01:54.576 **** 2025-09-11 01:11:58.976775 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.976785 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.976796 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.976807 | orchestrator | 2025-09-11 01:11:58.976817 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-09-11 01:11:58.976828 | orchestrator | Thursday 11 September 2025 01:09:21 +0000 (0:00:04.626) 0:01:59.203 **** 2025-09-11 01:11:58.976839 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.976849 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.976860 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.976870 | orchestrator | 2025-09-11 01:11:58.976881 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-11 01:11:58.976892 | orchestrator | Thursday 11 September 2025 01:09:22 +0000 (0:00:00.855) 0:02:00.058 **** 2025-09-11 01:11:58.976903 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:11:58.976913 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:11:58.976924 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:11:58.976934 | orchestrator | 2025-09-11 01:11:58.976945 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-09-11 01:11:58.976956 | orchestrator | Thursday 11 September 2025 01:09:24 +0000 (0:00:01.938) 0:02:01.997 **** 2025-09-11 01:11:58.976966 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.976977 | orchestrator | changed: [testbed-node-2] 
2025-09-11 01:11:58.976987 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.976998 | orchestrator | 2025-09-11 01:11:58.977008 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-09-11 01:11:58.977019 | orchestrator | Thursday 11 September 2025 01:09:25 +0000 (0:00:01.306) 0:02:03.303 **** 2025-09-11 01:11:58.977030 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.977040 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.977051 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.977062 | orchestrator | 2025-09-11 01:11:58.977072 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-09-11 01:11:58.977083 | orchestrator | Thursday 11 September 2025 01:09:26 +0000 (0:00:01.201) 0:02:04.505 **** 2025-09-11 01:11:58.977094 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.977104 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.977115 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.977126 | orchestrator | 2025-09-11 01:11:58.977180 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-11 01:11:58.977194 | orchestrator | Thursday 11 September 2025 01:09:28 +0000 (0:00:01.914) 0:02:06.419 **** 2025-09-11 01:11:58.977204 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.977215 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.977225 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.977236 | orchestrator | 2025-09-11 01:11:58.977247 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-11 01:11:58.977293 | orchestrator | Thursday 11 September 2025 01:09:30 +0000 (0:00:01.554) 0:02:07.974 **** 2025-09-11 01:11:58.977304 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:11:58.977315 | orchestrator | ok: [testbed-node-1] 2025-09-11 
01:11:58.977326 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:11:58.977336 | orchestrator | 2025-09-11 01:11:58.977347 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-09-11 01:11:58.977358 | orchestrator | Thursday 11 September 2025 01:09:31 +0000 (0:00:00.859) 0:02:08.833 **** 2025-09-11 01:11:58.977368 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:11:58.977379 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:11:58.977390 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:11:58.977400 | orchestrator | 2025-09-11 01:11:58.977411 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-11 01:11:58.977421 | orchestrator | Thursday 11 September 2025 01:09:33 +0000 (0:00:02.666) 0:02:11.500 **** 2025-09-11 01:11:58.977432 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:11:58.977443 | orchestrator | 2025-09-11 01:11:58.977454 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-11 01:11:58.977464 | orchestrator | Thursday 11 September 2025 01:09:34 +0000 (0:00:00.506) 0:02:12.006 **** 2025-09-11 01:11:58.977475 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:11:58.977485 | orchestrator | 2025-09-11 01:11:58.977496 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-11 01:11:58.977507 | orchestrator | Thursday 11 September 2025 01:09:37 +0000 (0:00:03.787) 0:02:15.794 **** 2025-09-11 01:11:58.977517 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:11:58.977528 | orchestrator | 2025-09-11 01:11:58.977538 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-11 01:11:58.977549 | orchestrator | Thursday 11 September 2025 01:09:41 +0000 (0:00:03.325) 0:02:19.119 **** 2025-09-11 01:11:58.977560 | 
orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-11 01:11:58.977570 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-11 01:11:58.977581 | orchestrator | 2025-09-11 01:11:58.977592 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-11 01:11:58.977602 | orchestrator | Thursday 11 September 2025 01:09:48 +0000 (0:00:06.785) 0:02:25.905 **** 2025-09-11 01:11:58.977613 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:11:58.977623 | orchestrator | 2025-09-11 01:11:58.977634 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-11 01:11:58.977645 | orchestrator | Thursday 11 September 2025 01:09:51 +0000 (0:00:03.353) 0:02:29.258 **** 2025-09-11 01:11:58.977660 | orchestrator | ok: [testbed-node-0] 2025-09-11 01:11:58.977671 | orchestrator | ok: [testbed-node-1] 2025-09-11 01:11:58.977682 | orchestrator | ok: [testbed-node-2] 2025-09-11 01:11:58.977692 | orchestrator | 2025-09-11 01:11:58.977703 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-11 01:11:58.977713 | orchestrator | Thursday 11 September 2025 01:09:51 +0000 (0:00:00.294) 0:02:29.553 **** 2025-09-11 01:11:58.977733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.977792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.977806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.977819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.977832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.977843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.977867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.977879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.977920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.977934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.977946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.977957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.977973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.977993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978063 | orchestrator | 2025-09-11 01:11:58.978077 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-11 01:11:58.978088 | orchestrator | Thursday 11 September 2025 01:09:54 +0000 (0:00:02.554) 0:02:32.108 **** 2025-09-11 01:11:58.978099 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:11:58.978110 | orchestrator | 2025-09-11 01:11:58.978155 | 
orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-11 01:11:58.978167 | orchestrator | Thursday 11 September 2025 01:09:54 +0000 (0:00:00.162) 0:02:32.271 **** 2025-09-11 01:11:58.978178 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:11:58.978189 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:11:58.978199 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:11:58.978210 | orchestrator | 2025-09-11 01:11:58.978221 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-11 01:11:58.978232 | orchestrator | Thursday 11 September 2025 01:09:54 +0000 (0:00:00.460) 0:02:32.732 **** 2025-09-11 01:11:58.978243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 01:11:58.978255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 01:11:58.978295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.978307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.978318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:11:58.978330 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:11:58.978404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 01:11:58.978419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 01:11:58.978430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.978448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.978464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:11:58.978475 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:11:58.978487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 01:11:58.978533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 01:11:58.978548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.978559 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.978578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:11:58.978589 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:11:58.978600 | orchestrator | 2025-09-11 01:11:58.978611 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-11 01:11:58.978622 | orchestrator | Thursday 11 September 2025 01:09:55 +0000 (0:00:00.689) 0:02:33.421 **** 2025-09-11 01:11:58.978633 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-11 01:11:58.978643 | orchestrator | 2025-09-11 01:11:58.978654 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-11 01:11:58.978670 | orchestrator | Thursday 11 September 2025 01:09:56 +0000 
(0:00:00.470) 0:02:33.892 **** 2025-09-11 01:11:58.978682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.978725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.978739 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.978763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.978774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
2025-09-11 01:11:58.978791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.978803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.978932 | orchestrator | 2025-09-11 01:11:58.978942 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-11 01:11:58.978953 | orchestrator | Thursday 11 September 2025 01:10:00 +0000 (0:00:04.915) 0:02:38.808 **** 2025-09-11 01:11:58.978965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 01:11:58.978984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 01:11:58.978996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:11:58.979034 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:11:58.979052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 01:11:58.979063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 01:11:58.979082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:11:58.979121 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:11:58.979133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 01:11:58.979149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 01:11:58.979161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:11:58.979202 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:11:58.979212 | orchestrator | 2025-09-11 01:11:58.979223 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-11 01:11:58.979234 | orchestrator | Thursday 11 September 2025 01:10:01 +0000 (0:00:00.874) 0:02:39.682 **** 2025-09-11 01:11:58.979250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 01:11:58.979281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 01:11:58.979293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:11:58.979342 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:11:58.979353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-09-11 01:11:58.979369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 01:11:58.979381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 01:11:58.979429 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:11:58.979440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-11 01:11:58.979451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-11 01:11:58.979462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-11 01:11:58.979490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-11 
01:11:58.979508 | orchestrator | skipping: [testbed-node-2] 2025-09-11 01:11:58.979519 | orchestrator | 2025-09-11 01:11:58.979530 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-11 01:11:58.979541 | orchestrator | Thursday 11 September 2025 01:10:02 +0000 (0:00:00.829) 0:02:40.512 **** 2025-09-11 01:11:58.979559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.979571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.979590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.979602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.979613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.979633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.979651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.979663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.979674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.979685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.979701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.979713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.979737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.979748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 
01:11:58.979760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.979770 | orchestrator | 2025-09-11 01:11:58.979781 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-11 01:11:58.979792 | orchestrator | Thursday 11 September 2025 01:10:07 +0000 (0:00:05.269) 0:02:45.782 **** 2025-09-11 01:11:58.979803 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-11 01:11:58.979815 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-11 01:11:58.979826 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-11 01:11:58.979836 | orchestrator | 2025-09-11 01:11:58.979847 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-11 01:11:58.979858 | orchestrator | Thursday 11 September 2025 01:10:10 +0000 (0:00:02.122) 0:02:47.904 **** 2025-09-11 01:11:58.979874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.979892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.979911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.979923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.979934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.979945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.979961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.979979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.979990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980086 | orchestrator | 2025-09-11 01:11:58.980097 | orchestrator | TASK [octavia : Copying over Octavia SSH key] 
********************************** 2025-09-11 01:11:58.980108 | orchestrator | Thursday 11 September 2025 01:10:25 +0000 (0:00:15.079) 0:03:02.984 **** 2025-09-11 01:11:58.980119 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.980130 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.980140 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.980151 | orchestrator | 2025-09-11 01:11:58.980161 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-11 01:11:58.980172 | orchestrator | Thursday 11 September 2025 01:10:26 +0000 (0:00:01.541) 0:03:04.525 **** 2025-09-11 01:11:58.980182 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-11 01:11:58.980193 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-11 01:11:58.980209 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-11 01:11:58.980220 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-11 01:11:58.980230 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-11 01:11:58.980241 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-11 01:11:58.980252 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-11 01:11:58.980319 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-11 01:11:58.980331 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-11 01:11:58.980341 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-11 01:11:58.980352 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-11 01:11:58.980362 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-11 01:11:58.980373 | orchestrator | 2025-09-11 01:11:58.980384 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-housekeeping] ************ 2025-09-11 01:11:58.980394 | orchestrator | Thursday 11 September 2025 01:10:31 +0000 (0:00:05.168) 0:03:09.693 **** 2025-09-11 01:11:58.980405 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-11 01:11:58.980416 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-11 01:11:58.980427 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-11 01:11:58.980437 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-11 01:11:58.980448 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-11 01:11:58.980458 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-11 01:11:58.980469 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-11 01:11:58.980480 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-11 01:11:58.980490 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-11 01:11:58.980501 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-11 01:11:58.980518 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-11 01:11:58.980529 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-11 01:11:58.980540 | orchestrator | 2025-09-11 01:11:58.980550 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-11 01:11:58.980561 | orchestrator | Thursday 11 September 2025 01:10:36 +0000 (0:00:05.070) 0:03:14.763 **** 2025-09-11 01:11:58.980571 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-11 01:11:58.980582 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-11 01:11:58.980592 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-11 01:11:58.980603 | orchestrator | 
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-11 01:11:58.980614 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-11 01:11:58.980624 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-11 01:11:58.980635 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-11 01:11:58.980645 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-11 01:11:58.980656 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-11 01:11:58.980666 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-11 01:11:58.980677 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-11 01:11:58.980693 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-11 01:11:58.980702 | orchestrator | 2025-09-11 01:11:58.980712 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-11 01:11:58.980721 | orchestrator | Thursday 11 September 2025 01:10:41 +0000 (0:00:05.042) 0:03:19.805 **** 2025-09-11 01:11:58.980731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.980749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.980759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-11 01:11:58.980775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.980785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.980800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-11 01:11:58.980810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-11 01:11:58.980926 | orchestrator | 2025-09-11 01:11:58.980936 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-11 01:11:58.980946 | orchestrator | Thursday 11 September 2025 01:10:45 +0000 (0:00:03.726) 0:03:23.532 **** 2025-09-11 01:11:58.980961 | orchestrator | skipping: [testbed-node-0] 2025-09-11 01:11:58.980970 | orchestrator | skipping: [testbed-node-1] 2025-09-11 01:11:58.980980 | orchestrator | skipping: [testbed-node-2] 
2025-09-11 01:11:58.980989 | orchestrator | 2025-09-11 01:11:58.980998 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-11 01:11:58.981008 | orchestrator | Thursday 11 September 2025 01:10:46 +0000 (0:00:00.317) 0:03:23.849 **** 2025-09-11 01:11:58.981017 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.981027 | orchestrator | 2025-09-11 01:11:58.981036 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-11 01:11:58.981046 | orchestrator | Thursday 11 September 2025 01:10:48 +0000 (0:00:02.138) 0:03:25.988 **** 2025-09-11 01:11:58.981055 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.981064 | orchestrator | 2025-09-11 01:11:58.981074 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-11 01:11:58.981083 | orchestrator | Thursday 11 September 2025 01:10:50 +0000 (0:00:02.218) 0:03:28.207 **** 2025-09-11 01:11:58.981093 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.981102 | orchestrator | 2025-09-11 01:11:58.981112 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-11 01:11:58.981121 | orchestrator | Thursday 11 September 2025 01:10:52 +0000 (0:00:02.220) 0:03:30.427 **** 2025-09-11 01:11:58.981131 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.981140 | orchestrator | 2025-09-11 01:11:58.981149 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-11 01:11:58.981159 | orchestrator | Thursday 11 September 2025 01:10:54 +0000 (0:00:02.203) 0:03:32.631 **** 2025-09-11 01:11:58.981168 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.981177 | orchestrator | 2025-09-11 01:11:58.981187 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-11 01:11:58.981196 | orchestrator | 
Thursday 11 September 2025 01:11:15 +0000 (0:00:20.417) 0:03:53.049 **** 2025-09-11 01:11:58.981206 | orchestrator | 2025-09-11 01:11:58.981215 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-11 01:11:58.981225 | orchestrator | Thursday 11 September 2025 01:11:15 +0000 (0:00:00.066) 0:03:53.115 **** 2025-09-11 01:11:58.981234 | orchestrator | 2025-09-11 01:11:58.981243 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-11 01:11:58.981253 | orchestrator | Thursday 11 September 2025 01:11:15 +0000 (0:00:00.066) 0:03:53.182 **** 2025-09-11 01:11:58.981278 | orchestrator | 2025-09-11 01:11:58.981287 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-11 01:11:58.981297 | orchestrator | Thursday 11 September 2025 01:11:15 +0000 (0:00:00.064) 0:03:53.246 **** 2025-09-11 01:11:58.981306 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.981315 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.981325 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.981334 | orchestrator | 2025-09-11 01:11:58.981344 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-11 01:11:58.981353 | orchestrator | Thursday 11 September 2025 01:11:26 +0000 (0:00:10.599) 0:04:03.846 **** 2025-09-11 01:11:58.981362 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.981372 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.981381 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.981391 | orchestrator | 2025-09-11 01:11:58.981404 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-11 01:11:58.981414 | orchestrator | Thursday 11 September 2025 01:11:34 +0000 (0:00:08.106) 0:04:11.952 **** 2025-09-11 01:11:58.981424 | orchestrator | changed: [testbed-node-0] 
2025-09-11 01:11:58.981433 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.981442 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.981452 | orchestrator | 2025-09-11 01:11:58.981461 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-11 01:11:58.981470 | orchestrator | Thursday 11 September 2025 01:11:39 +0000 (0:00:05.394) 0:04:17.347 **** 2025-09-11 01:11:58.981486 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.981495 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.981504 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.981514 | orchestrator | 2025-09-11 01:11:58.981523 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-11 01:11:58.981532 | orchestrator | Thursday 11 September 2025 01:11:50 +0000 (0:00:10.739) 0:04:28.087 **** 2025-09-11 01:11:58.981542 | orchestrator | changed: [testbed-node-0] 2025-09-11 01:11:58.981551 | orchestrator | changed: [testbed-node-1] 2025-09-11 01:11:58.981561 | orchestrator | changed: [testbed-node-2] 2025-09-11 01:11:58.981570 | orchestrator | 2025-09-11 01:11:58.981579 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-11 01:11:58.981589 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-11 01:11:58.981599 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-11 01:11:58.981609 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-11 01:11:58.981618 | orchestrator | 2025-09-11 01:11:58.981627 | orchestrator | 2025-09-11 01:11:58.981637 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-11 01:11:58.981646 | orchestrator | Thursday 11 September 2025 01:11:55 
+0000 (0:00:05.560) 0:04:33.648 **** 2025-09-11 01:11:58.981660 | orchestrator | =============================================================================== 2025-09-11 01:11:58.981670 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.42s 2025-09-11 01:11:58.981680 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.06s 2025-09-11 01:11:58.981689 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.16s 2025-09-11 01:11:58.981698 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.08s 2025-09-11 01:11:58.981708 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.74s 2025-09-11 01:11:58.981717 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.60s 2025-09-11 01:11:58.981726 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.82s 2025-09-11 01:11:58.981736 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.38s 2025-09-11 01:11:58.981745 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.11s 2025-09-11 01:11:58.981754 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.74s 2025-09-11 01:11:58.981764 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.84s 2025-09-11 01:11:58.981773 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.83s 2025-09-11 01:11:58.981782 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.79s 2025-09-11 01:11:58.981792 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.81s 2025-09-11 01:11:58.981801 | orchestrator | octavia : Restart octavia-worker container 
------------------------------ 5.56s 2025-09-11 01:11:58.981810 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.39s 2025-09-11 01:11:58.981820 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.27s 2025-09-11 01:11:58.981829 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.17s 2025-09-11 01:11:58.981838 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.07s 2025-09-11 01:11:58.981848 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.04s 2025-09-11 01:11:58.981857 | orchestrator | 2025-09-11 01:11:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:02.018673 | orchestrator | 2025-09-11 01:12:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:05.064819 | orchestrator | 2025-09-11 01:12:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:08.108702 | orchestrator | 2025-09-11 01:12:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:11.152820 | orchestrator | 2025-09-11 01:12:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:14.196227 | orchestrator | 2025-09-11 01:12:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:17.238394 | orchestrator | 2025-09-11 01:12:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:20.278615 | orchestrator | 2025-09-11 01:12:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:23.314641 | orchestrator | 2025-09-11 01:12:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:26.350827 | orchestrator | 2025-09-11 01:12:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:29.399282 | orchestrator | 2025-09-11 01:12:29 | INFO  | Wait 1 second(s) until refresh of running tasks 
2025-09-11 01:12:32.436498 | orchestrator | 2025-09-11 01:12:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:35.477629 | orchestrator | 2025-09-11 01:12:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:38.520354 | orchestrator | 2025-09-11 01:12:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:41.559800 | orchestrator | 2025-09-11 01:12:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:44.597340 | orchestrator | 2025-09-11 01:12:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:47.635707 | orchestrator | 2025-09-11 01:12:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:50.675047 | orchestrator | 2025-09-11 01:12:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:53.714797 | orchestrator | 2025-09-11 01:12:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:56.756782 | orchestrator | 2025-09-11 01:12:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-11 01:12:59.798299 | orchestrator | 2025-09-11 01:13:00.045321 | orchestrator | 2025-09-11 01:13:00.046707 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Sep 11 01:13:00 UTC 2025 2025-09-11 01:13:00.046744 | orchestrator | 2025-09-11 01:13:00.336784 | orchestrator | ok: Runtime: 0:33:25.325716 2025-09-11 01:13:00.587159 | 2025-09-11 01:13:00.587351 | TASK [Bootstrap services] 2025-09-11 01:13:01.290568 | orchestrator | 2025-09-11 01:13:01.290709 | orchestrator | # BOOTSTRAP 2025-09-11 01:13:01.290729 | orchestrator | 2025-09-11 01:13:01.290743 | orchestrator | + set -e 2025-09-11 01:13:01.290756 | orchestrator | + echo 2025-09-11 01:13:01.290769 | orchestrator | + echo '# BOOTSTRAP' 2025-09-11 01:13:01.290786 | orchestrator | + echo 2025-09-11 01:13:01.290827 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-11 01:13:01.299419 | 
orchestrator | + set -e 2025-09-11 01:13:01.299524 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-11 01:13:04.682642 | orchestrator | 2025-09-11 01:13:04 | INFO  | It takes a moment until task 5efb3837-7d22-47a9-ae6d-0486821c9228 (flavor-manager) has been started and output is visible here. 2025-09-11 01:13:07.832906 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-09-11 01:13:07.833000 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:194 │ 2025-09-11 01:13:07.833022 | orchestrator | │ in run │ 2025-09-11 01:13:07.833035 | orchestrator | │ │ 2025-09-11 01:13:07.833046 | orchestrator | │ 191 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-09-11 01:13:07.833067 | orchestrator | │ 192 │ │ 2025-09-11 01:13:07.833078 | orchestrator | │ 193 │ definitions = get_flavor_definitions(name, url) │ 2025-09-11 01:13:07.833091 | orchestrator | │ ❱ 194 │ manager = FlavorManager( │ 2025-09-11 01:13:07.833102 | orchestrator | │ 195 │ │ cloud=Cloud(cloud), │ 2025-09-11 01:13:07.833114 | orchestrator | │ 196 │ │ definitions=definitions, │ 2025-09-11 01:13:07.833125 | orchestrator | │ 197 │ │ recommended=recommended, │ 2025-09-11 01:13:07.833136 | orchestrator | │ │ 2025-09-11 01:13:07.833148 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-11 01:13:07.833170 | orchestrator | │ │ cloud = 'admin' │ │ 2025-09-11 01:13:07.833182 | orchestrator | │ │ debug = False │ │ 2025-09-11 01:13:07.833193 | orchestrator | │ │ definitions = { │ │ 2025-09-11 01:13:07.833232 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-11 01:13:07.833243 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-11 01:13:07.833254 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-11 01:13:07.833265 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-11 
01:13:07.833276 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-11 01:13:07.833287 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-11 01:13:07.833299 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-11 01:13:07.833310 | orchestrator | │ │ │ ], │ │ 2025-09-11 01:13:07.833321 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-11 01:13:07.833332 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.833343 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-11 01:13:07.833377 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.833389 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-11 01:13:07.833400 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-11 01:13:07.833411 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-11 01:13:07.833422 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.833433 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-11 01:13:07.833443 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-11 01:13:07.833454 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.833465 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.833476 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.833487 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-11 01:13:07.833498 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.833508 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-11 01:13:07.833519 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-11 01:13:07.833530 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-11 01:13:07.833557 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.833568 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-11 01:13:07.833579 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-09-11 01:13:07.833590 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 
2025-09-11 01:13:07.833600 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.833611 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.833622 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-11 01:13:07.833638 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.833649 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-11 01:13:07.833660 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-11 01:13:07.833670 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.833682 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.833693 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-11 01:13:07.833703 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-11 01:13:07.833714 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.833725 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.833736 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.833747 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-11 01:13:07.833758 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.833776 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-11 01:13:07.833787 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-11 01:13:07.833798 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.833809 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.833820 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-11 01:13:07.833830 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-11 01:13:07.833841 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.833852 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.833863 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.833873 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-11 01:13:07.833884 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.833895 | orchestrator | │ │ │ │ │ 'ram': 
4096, │ │ 2025-09-11 01:13:07.833906 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-11 01:13:07.833916 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.833927 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.833938 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-09-11 01:13:07.833949 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-11 01:13:07.833959 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.833970 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.833981 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.833992 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-11 01:13:07.834003 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.834163 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-11 01:13:07.834181 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-11 01:13:07.834219 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.861421 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.861478 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-11 01:13:07.861490 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-09-11 01:13:07.861501 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.861512 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.861522 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.861533 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-09-11 01:13:07.861544 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.861579 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-11 01:13:07.861591 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-11 01:13:07.861602 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.861613 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.861624 | orchestrator | │ │ │ │ │ 'scs:name-v1': 
'SCS-1V:8', │ │ 2025-09-11 01:13:07.861635 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-09-11 01:13:07.861646 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.861657 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.861668 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.861679 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-09-11 01:13:07.861690 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.861701 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-11 01:13:07.861712 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-09-11 01:13:07.861722 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.861733 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.861744 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-09-11 01:13:07.861755 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-09-11 01:13:07.861766 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.861777 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.861788 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.861799 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-09-11 01:13:07.861810 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-11 01:13:07.861823 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-11 01:13:07.861834 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-11 01:13:07.861845 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.861855 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.861866 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-09-11 01:13:07.861877 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-09-11 01:13:07.861888 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.861908 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.861919 | orchestrator | │ │ │ │ { │ │ 2025-09-11 
01:13:07.861930 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-09-11 01:13:07.861941 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-11 01:13:07.861959 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-11 01:13:07.861970 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-11 01:13:07.861993 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.862005 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.862040 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-09-11 01:13:07.862054 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-09-11 01:13:07.862065 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.862076 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.862086 | orchestrator | │ │ │ │ ... +19 │ │ 2025-09-11 01:13:07.862097 | orchestrator | │ │ │ ] │ │ 2025-09-11 01:13:07.862108 | orchestrator | │ │ } │ │ 2025-09-11 01:13:07.862119 | orchestrator | │ │ level = 'INFO' │ │ 2025-09-11 01:13:07.862130 | orchestrator | │ │ limit_memory = 32 │ │ 2025-09-11 01:13:07.862141 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | │ │ 2025-09-11 01:13:07.862152 | orchestrator | │ │ {level: <8} | '+17 │ │ 2025-09-11 01:13:07.862163 | orchestrator | │ │ name = 'local' │ │ 2025-09-11 01:13:07.862174 | orchestrator | │ │ recommended = True │ │ 2025-09-11 01:13:07.862184 | orchestrator | │ │ url = None │ │ 2025-09-11 01:13:07.862196 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-09-11 01:13:07.862230 | orchestrator | │ │ 2025-09-11 01:13:07.862242 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:101 │ 2025-09-11 01:13:07.862253 | orchestrator | │ in __init__ │ 2025-09-11 01:13:07.862263 | orchestrator | │ │ 2025-09-11 01:13:07.862274 | orchestrator | │ 98 │ │ self.required_flavors = definitions["mandatory"] │ 2025-09-11 01:13:07.862285 | 
orchestrator | │ 99 │ │ self.cloud = cloud │ 2025-09-11 01:13:07.862296 | orchestrator | │ 100 │ │ if recommended: │ 2025-09-11 01:13:07.862306 | orchestrator | │ ❱ 101 │ │ │ recommended_flavors = definitions["recommended"] │ 2025-09-11 01:13:07.862317 | orchestrator | │ 102 │ │ │ # Filter recommended flavors based on memory limit │ 2025-09-11 01:13:07.862328 | orchestrator | │ 103 │ │ │ limit_memory_mb = limit_memory * 1024 │ 2025-09-11 01:13:07.862338 | orchestrator | │ 104 │ │ │ filtered_recommended = [ │ 2025-09-11 01:13:07.862349 | orchestrator | │ │ 2025-09-11 01:13:07.862378 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-11 01:13:07.862408 | orchestrator | │ │ cloud = │ │ 2025-09-11 01:13:07.862430 | orchestrator | │ │ definitions = { │ │ 2025-09-11 01:13:07.862441 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-11 01:13:07.862452 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-11 01:13:07.862463 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-11 01:13:07.862474 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-11 01:13:07.862485 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-11 01:13:07.862496 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-11 01:13:07.862507 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-11 01:13:07.862518 | orchestrator | │ │ │ ], │ │ 2025-09-11 01:13:07.862529 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-11 01:13:07.862547 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.895823 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-11 01:13:07.895863 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.895877 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-11 01:13:07.895890 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-11 01:13:07.895901 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-11 01:13:07.895912 | 
orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.895923 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-11 01:13:07.895934 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-11 01:13:07.895946 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.895956 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.895967 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.895978 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-11 01:13:07.895988 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.895999 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-11 01:13:07.896010 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-11 01:13:07.896020 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-11 01:13:07.896031 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.896042 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-11 01:13:07.896053 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-09-11 01:13:07.896063 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.896085 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.896095 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.896106 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-11 01:13:07.896117 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.896128 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-11 01:13:07.896139 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-11 01:13:07.896149 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.896160 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.896171 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-11 01:13:07.896182 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-11 01:13:07.896192 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 
01:13:07.896224 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.896242 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.896254 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-11 01:13:07.896264 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.896275 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-11 01:13:07.896286 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-11 01:13:07.896296 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.896307 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.896318 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-11 01:13:07.896329 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-11 01:13:07.896340 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.896351 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.896371 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.896383 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-11 01:13:07.896394 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.896404 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-11 01:13:07.896415 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-11 01:13:07.896426 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.896437 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.896448 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-09-11 01:13:07.896458 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-11 01:13:07.896469 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.896486 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.896497 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.896508 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-11 01:13:07.896519 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.896529 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 
2025-09-11 01:13:07.896540 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-11 01:13:07.896551 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.896562 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.896572 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-11 01:13:07.896583 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-09-11 01:13:07.896594 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.896605 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.896616 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.896627 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-09-11 01:13:07.896637 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.896648 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-11 01:13:07.896659 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-11 01:13:07.896669 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.896682 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.896693 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-09-11 01:13:07.896703 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-09-11 01:13:07.896714 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.896725 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.896736 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.896747 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-09-11 01:13:07.896757 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-11 01:13:07.896768 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-11 01:13:07.896779 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-09-11 01:13:07.896789 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.896800 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.896811 | orchestrator | │ │ │ │ │ 'scs:name-v1': 
'SCS-1V:8:20', │ │ 2025-09-11 01:13:07.896821 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-09-11 01:13:07.896832 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.896855 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.905838 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.905919 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-09-11 01:13:07.905968 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-11 01:13:07.905981 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-11 01:13:07.905992 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-11 01:13:07.906003 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.906044 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.906059 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-09-11 01:13:07.906070 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-09-11 01:13:07.906081 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.906091 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.906102 | orchestrator | │ │ │ │ { │ │ 2025-09-11 01:13:07.906113 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-09-11 01:13:07.906124 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-11 01:13:07.906134 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-11 01:13:07.906145 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-11 01:13:07.906156 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-11 01:13:07.906167 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-11 01:13:07.906177 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-09-11 01:13:07.906188 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-09-11 01:13:07.906217 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-11 01:13:07.906229 | orchestrator | │ │ │ │ }, │ │ 2025-09-11 01:13:07.906240 | orchestrator | │ │ │ │ ... 
+19 │ │
2025-09-11 01:13:07.906251 | orchestrator | │ │ │ ] │ │
2025-09-11 01:13:07.906262 | orchestrator | │ │ } │ │
2025-09-11 01:13:07.906273 | orchestrator | │ │ limit_memory = 32 │ │
2025-09-11 01:13:07.906283 | orchestrator | │ │ recommended = True │ │
2025-09-11 01:13:07.906294 | orchestrator | │ │ self = │ │
2025-09-11 01:13:07.906316 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │
2025-09-11 01:13:07.906329 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯
2025-09-11 01:13:07.906359 | orchestrator | KeyError: 'recommended'
2025-09-11 01:13:08.201414 | orchestrator | ERROR
2025-09-11 01:13:08.201558 | orchestrator | {
2025-09-11 01:13:08.201594 | orchestrator | "delta": "0:00:07.175385",
2025-09-11 01:13:08.201629 | orchestrator | "end": "2025-09-11 01:13:08.097652",
2025-09-11 01:13:08.201652 | orchestrator | "msg": "non-zero return code",
2025-09-11 01:13:08.201671 | orchestrator | "rc": 1,
2025-09-11 01:13:08.201690 | orchestrator | "start": "2025-09-11 01:13:00.922267"
2025-09-11 01:13:08.201708 | orchestrator | } failure
2025-09-11 01:13:08.210722 |
2025-09-11 01:13:08.210798 | PLAY RECAP
2025-09-11 01:13:08.210858 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-11 01:13:08.210885 |
2025-09-11 01:13:08.356289 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-11 01:13:08.357356 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-11 01:13:09.006267 |
2025-09-11 01:13:09.006392 | PLAY [Post output play]
2025-09-11 01:13:09.020700 |
2025-09-11 01:13:09.020811 | LOOP [stage-output : Register sources]
2025-09-11 01:13:09.074741 |
2025-09-11 01:13:09.074985 | TASK [stage-output : Check sudo]
2025-09-11 01:13:09.808732 | orchestrator | sudo: a password is required
2025-09-11 01:13:10.109593 | orchestrator | ok: Runtime: 0:00:00.008877
2025-09-11 01:13:10.124108 |
2025-09-11 01:13:10.124252 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-11 01:13:10.165076 |
2025-09-11 01:13:10.165369 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-11 01:13:10.234780 | orchestrator | ok
2025-09-11 01:13:10.243659 |
2025-09-11 01:13:10.243784 | LOOP [stage-output : Ensure target folders exist]
2025-09-11 01:13:10.630109 | orchestrator | ok: "docs"
2025-09-11 01:13:10.630944 |
2025-09-11 01:13:10.838167 | orchestrator | ok: "artifacts"
2025-09-11 01:13:11.052780 | orchestrator | ok: "logs"
2025-09-11 01:13:11.070912 |
2025-09-11 01:13:11.071063 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-11 01:13:11.109797 |
2025-09-11 01:13:11.110055 | TASK [stage-output : Make all log files readable]
2025-09-11 01:13:11.373103 | orchestrator | ok
2025-09-11 01:13:11.383810 |
2025-09-11 01:13:11.383947 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-11 01:13:11.419094 | orchestrator | skipping: Conditional result was False
2025-09-11 01:13:11.436161 |
2025-09-11 01:13:11.436373 | TASK [stage-output : Discover log files for compression]
2025-09-11 01:13:11.461623 | orchestrator | skipping: Conditional result was False
2025-09-11 01:13:11.474975 |
2025-09-11 01:13:11.475121 | LOOP [stage-output : Archive everything from logs]
2025-09-11 01:13:11.518199 |
2025-09-11 01:13:11.518380 | PLAY [Post cleanup play]
2025-09-11 01:13:11.526206 |
2025-09-11 01:13:11.526336 | TASK [Set cloud fact (Zuul deployment)]
2025-09-11 01:13:11.578283 | orchestrator | ok
2025-09-11 01:13:11.588283 |
2025-09-11 01:13:11.588371 | TASK [Set cloud fact (local deployment)]
2025-09-11 01:13:11.611704 | orchestrator | skipping: Conditional result was False
2025-09-11 01:13:11.627381 |
2025-09-11 01:13:11.627517 | TASK [Clean the cloud environment]
2025-09-11 01:13:12.138528 | orchestrator | 2025-09-11 01:13:12 - clean up servers
2025-09-11 01:13:12.862871 | orchestrator | 2025-09-11 01:13:12 - testbed-manager
2025-09-11 01:13:12.944587 | orchestrator | 2025-09-11 01:13:12 - testbed-node-3
2025-09-11 01:13:13.039272 | orchestrator | 2025-09-11 01:13:13 - testbed-node-2
2025-09-11 01:13:13.147862 | orchestrator | 2025-09-11 01:13:13 - testbed-node-0
2025-09-11 01:13:13.247401 | orchestrator | 2025-09-11 01:13:13 - testbed-node-1
2025-09-11 01:13:13.351510 | orchestrator | 2025-09-11 01:13:13 - testbed-node-4
2025-09-11 01:13:13.443965 | orchestrator | 2025-09-11 01:13:13 - testbed-node-5
2025-09-11 01:13:13.533734 | orchestrator | 2025-09-11 01:13:13 - clean up keypairs
2025-09-11 01:13:13.555385 | orchestrator | 2025-09-11 01:13:13 - testbed
2025-09-11 01:13:13.581505 | orchestrator | 2025-09-11 01:13:13 - wait for servers to be gone
2025-09-11 01:13:22.401435 | orchestrator | 2025-09-11 01:13:22 - clean up ports
2025-09-11 01:13:22.575850 | orchestrator | 2025-09-11 01:13:22 - 5e96e4cc-26ff-45e4-895e-c4f64b066888
2025-09-11 01:13:22.972850 | orchestrator | 2025-09-11 01:13:22 - 77536e8d-b160-4a50-9df6-0e6ba4567e62
2025-09-11 01:13:23.251648 | orchestrator | 2025-09-11 01:13:23 - 80e23495-4ac5-488d-88b0-8f66b859d3ff
2025-09-11 01:13:23.475028 | orchestrator | 2025-09-11 01:13:23 - 9010f73a-c3fd-4ff9-9944-0469b01464c9
2025-09-11 01:13:23.690145 | orchestrator | 2025-09-11 01:13:23 - e8be9497-141d-4f79-b182-9327f13587ad
2025-09-11 01:13:23.910816 | orchestrator | 2025-09-11 01:13:23 - efe5a958-2a56-4f86-84aa-df389a58acc9
2025-09-11 01:13:24.156615 | orchestrator | 2025-09-11 01:13:24 - fa1ed823-68b3-43d8-b642-0124c9dfc131
2025-09-11 01:13:24.536623 | orchestrator | 2025-09-11 01:13:24 - clean up volumes
2025-09-11 01:13:24.650944 | orchestrator | 2025-09-11 01:13:24 - testbed-volume-1-node-base
2025-09-11 01:13:24.688558 | orchestrator | 2025-09-11 01:13:24 - testbed-volume-4-node-base
2025-09-11 01:13:24.727331 | orchestrator | 2025-09-11 01:13:24 - testbed-volume-5-node-base
2025-09-11 01:13:24.773697 | orchestrator | 2025-09-11 01:13:24 - testbed-volume-2-node-base
2025-09-11 01:13:24.816823 | orchestrator | 2025-09-11 01:13:24 - testbed-volume-manager-base
2025-09-11 01:13:24.865971 | orchestrator | 2025-09-11 01:13:24 - testbed-volume-3-node-base
2025-09-11 01:13:25.004505 | orchestrator | 2025-09-11 01:13:25 - testbed-volume-0-node-base
2025-09-11 01:13:25.067362 | orchestrator | 2025-09-11 01:13:25 - testbed-volume-2-node-5
2025-09-11 01:13:25.120266 | orchestrator | 2025-09-11 01:13:25 - testbed-volume-7-node-4
2025-09-11 01:13:25.166695 | orchestrator | 2025-09-11 01:13:25 - testbed-volume-6-node-3
2025-09-11 01:13:25.206646 | orchestrator | 2025-09-11 01:13:25 - testbed-volume-3-node-3
2025-09-11 01:13:25.248087 | orchestrator | 2025-09-11 01:13:25 - testbed-volume-1-node-4
2025-09-11 01:13:25.286593 | orchestrator | 2025-09-11 01:13:25 - testbed-volume-8-node-5
2025-09-11 01:13:25.328343 | orchestrator | 2025-09-11 01:13:25 - testbed-volume-5-node-5
2025-09-11 01:13:25.369908 | orchestrator | 2025-09-11 01:13:25 - testbed-volume-0-node-3
2025-09-11 01:13:25.585611 | orchestrator | 2025-09-11 01:13:25 - testbed-volume-4-node-4
2025-09-11 01:13:25.626328 | orchestrator | 2025-09-11 01:13:25 - disconnect routers
2025-09-11 01:13:25.745878 | orchestrator | 2025-09-11 01:13:25 - testbed
2025-09-11 01:13:26.640178 | orchestrator | 2025-09-11 01:13:26 - clean up subnets
2025-09-11 01:13:26.677243 | orchestrator | 2025-09-11 01:13:26 - subnet-testbed-management
2025-09-11 01:13:26.856485 | orchestrator | 2025-09-11 01:13:26 - clean up networks
2025-09-11 01:13:27.025977 | orchestrator | 2025-09-11 01:13:27 - net-testbed-management
2025-09-11 01:13:27.309103 | orchestrator | 2025-09-11 01:13:27 - clean up security groups
2025-09-11 01:13:27.347797 | orchestrator | 2025-09-11 01:13:27 - testbed-node
2025-09-11 01:13:27.455385 | orchestrator | 2025-09-11 01:13:27 - testbed-management
2025-09-11 01:13:27.569913 | orchestrator | 2025-09-11 01:13:27 - clean up floating ips
2025-09-11 01:13:27.606513 | orchestrator | 2025-09-11 01:13:27 - 81.163.192.14
2025-09-11 01:13:27.944689 | orchestrator | 2025-09-11 01:13:27 - clean up routers
2025-09-11 01:13:28.045582 | orchestrator | 2025-09-11 01:13:28 - testbed
2025-09-11 01:13:29.175043 | orchestrator | ok: Runtime: 0:00:16.993446
2025-09-11 01:13:29.179541 |
2025-09-11 01:13:29.179700 | PLAY RECAP
2025-09-11 01:13:29.179803 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-11 01:13:29.179853 |
2025-09-11 01:13:29.312475 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-11 01:13:29.314900 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-11 01:13:30.029488 |
2025-09-11 01:13:30.029647 | PLAY [Cleanup play]
2025-09-11 01:13:30.045468 |
2025-09-11 01:13:30.045593 | TASK [Set cloud fact (Zuul deployment)]
2025-09-11 01:13:30.096918 | orchestrator | ok
2025-09-11 01:13:30.103645 |
2025-09-11 01:13:30.103774 | TASK [Set cloud fact (local deployment)]
2025-09-11 01:13:30.138201 | orchestrator | skipping: Conditional result was False
2025-09-11 01:13:30.158421 |
2025-09-11 01:13:30.158573 | TASK [Clean the cloud environment]
2025-09-11 01:13:31.335807 | orchestrator | 2025-09-11 01:13:31 - clean up servers
2025-09-11 01:13:31.809010 | orchestrator | 2025-09-11 01:13:31 - clean up keypairs
2025-09-11 01:13:31.826805 | orchestrator | 2025-09-11 01:13:31 - wait for servers to be gone
2025-09-11 01:13:31.873059 | orchestrator | 2025-09-11 01:13:31 - clean up ports
2025-09-11 01:13:31.951044 | orchestrator | 2025-09-11 01:13:31 - clean up volumes
2025-09-11 01:13:32.010866 | orchestrator | 2025-09-11 01:13:32 - disconnect routers
2025-09-11 01:13:32.038693 | orchestrator | 2025-09-11 01:13:32 - clean up subnets
2025-09-11 01:13:32.064169 | orchestrator | 2025-09-11 01:13:32 - clean up networks
2025-09-11 01:13:32.199243 | orchestrator | 2025-09-11 01:13:32 - clean up security groups
2025-09-11 01:13:32.237073 | orchestrator | 2025-09-11 01:13:32 - clean up floating ips
2025-09-11 01:13:32.261485 | orchestrator | 2025-09-11 01:13:32 - clean up routers
2025-09-11 01:13:32.705921 | orchestrator | ok: Runtime: 0:00:01.341614
2025-09-11 01:13:32.709912 |
2025-09-11 01:13:32.710076 | PLAY RECAP
2025-09-11 01:13:32.710200 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-11 01:13:32.710315 |
2025-09-11 01:13:32.841099 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-11 01:13:32.843607 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-11 01:13:33.565939 |
2025-09-11 01:13:33.566087 | PLAY [Base post-fetch]
2025-09-11 01:13:33.581466 |
2025-09-11 01:13:33.581588 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-11 01:13:33.636806 | orchestrator | skipping: Conditional result was False
2025-09-11 01:13:33.650156 |
2025-09-11 01:13:33.650390 | TASK [fetch-output : Set log path for single node]
2025-09-11 01:13:33.696278 | orchestrator | ok
2025-09-11 01:13:33.705073 |
2025-09-11 01:13:33.705205 | LOOP [fetch-output : Ensure local output dirs]
2025-09-11 01:13:34.189100 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/111b3509201f44bf8eed852029dc6ac2/work/logs"
2025-09-11 01:13:34.472113 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/111b3509201f44bf8eed852029dc6ac2/work/artifacts"
2025-09-11 01:13:34.747091 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/111b3509201f44bf8eed852029dc6ac2/work/docs"
2025-09-11 01:13:34.770742 |
2025-09-11 01:13:34.770949 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-11 01:13:35.676052 | orchestrator | changed: .d..t...... ./
2025-09-11 01:13:35.676445 | orchestrator | changed: All items complete
2025-09-11 01:13:35.676500 |
2025-09-11 01:13:36.408169 | orchestrator | changed: .d..t...... ./
2025-09-11 01:13:37.159164 | orchestrator | changed: .d..t...... ./
2025-09-11 01:13:37.191319 |
2025-09-11 01:13:37.191456 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-11 01:13:37.229987 | orchestrator | skipping: Conditional result was False
2025-09-11 01:13:37.232731 | orchestrator | skipping: Conditional result was False
2025-09-11 01:13:37.246851 |
2025-09-11 01:13:37.246947 | PLAY RECAP
2025-09-11 01:13:37.247008 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-11 01:13:37.247042 |
2025-09-11 01:13:37.366653 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-11 01:13:37.368795 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-11 01:13:38.095654 |
2025-09-11 01:13:38.095807 | PLAY [Base post]
2025-09-11 01:13:38.109843 |
2025-09-11 01:13:38.109967 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-11 01:13:39.091653 | orchestrator | changed
2025-09-11 01:13:39.102094 |
2025-09-11 01:13:39.102234 | PLAY RECAP
2025-09-11 01:13:39.102327 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-11 01:13:39.102402 |
2025-09-11 01:13:39.228914 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-11 01:13:39.231077 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-11 01:13:40.012977 |
2025-09-11 01:13:40.013142 | PLAY [Base post-logs]
2025-09-11 01:13:40.023702 |
2025-09-11 01:13:40.023836 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-11 01:13:40.456485 | localhost | changed
2025-09-11 01:13:40.467207 |
2025-09-11 01:13:40.467402 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-11 01:13:40.503179 | localhost | ok
2025-09-11 01:13:40.506545 |
2025-09-11 01:13:40.506647 | TASK [Set zuul-log-path fact]
2025-09-11 01:13:40.521747 | localhost | ok
2025-09-11 01:13:40.531546 |
2025-09-11 01:13:40.531670 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-11 01:13:40.568633 | localhost | ok
2025-09-11 01:13:40.575766 |
2025-09-11 01:13:40.575938 | TASK [upload-logs : Create log directories]
2025-09-11 01:13:41.069645 | localhost | changed
2025-09-11 01:13:41.072550 |
2025-09-11 01:13:41.072662 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-11 01:13:41.568685 | localhost -> localhost | ok: Runtime: 0:00:00.007154
2025-09-11 01:13:41.575196 |
2025-09-11 01:13:41.575423 | TASK [upload-logs : Upload logs to log server]
2025-09-11 01:13:42.120539 | localhost | Output suppressed because no_log was given
2025-09-11 01:13:42.123328 |
2025-09-11 01:13:42.123481 | LOOP [upload-logs : Compress console log and json output]
2025-09-11 01:13:42.177186 | localhost | skipping: Conditional result was False
2025-09-11 01:13:42.181996 | localhost | skipping: Conditional result was False
2025-09-11 01:13:42.195530 |
2025-09-11 01:13:42.195752 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-11 01:13:42.241377 | localhost | skipping: Conditional result was False
2025-09-11 01:13:42.241932 |
2025-09-11 01:13:42.245481 | localhost | skipping: Conditional result was False
2025-09-11 01:13:42.260039 |
2025-09-11 01:13:42.260361 | LOOP [upload-logs : Upload console log and json output]